2026-01-05 00:00:06.567487 | Job console starting
2026-01-05 00:00:06.598250 | Updating git repos
2026-01-05 00:00:06.691207 | Cloning repos into workspace
2026-01-05 00:00:07.071084 | Restoring repo states
2026-01-05 00:00:07.124839 | Merging changes
2026-01-05 00:00:07.124866 | Checking out repos
2026-01-05 00:00:07.587018 | Preparing playbooks
2026-01-05 00:00:08.863840 | Running Ansible setup
2026-01-05 00:00:17.800220 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-01-05 00:00:21.010110 |
2026-01-05 00:00:21.010874 | PLAY [Base pre]
2026-01-05 00:00:21.043651 |
2026-01-05 00:00:21.043830 | TASK [Setup log path fact]
2026-01-05 00:00:21.099793 | orchestrator | ok
2026-01-05 00:00:21.136578 |
2026-01-05 00:00:21.136768 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-05 00:00:21.238561 | orchestrator | ok
2026-01-05 00:00:21.326222 |
2026-01-05 00:00:21.326429 | TASK [emit-job-header : Print job information]
2026-01-05 00:00:21.425864 | # Job Information
2026-01-05 00:00:21.426193 | Ansible Version: 2.16.14
2026-01-05 00:00:21.426239 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-01-05 00:00:21.426275 | Pipeline: periodic-midnight
2026-01-05 00:00:21.426297 | Executor: 521e9411259a
2026-01-05 00:00:21.426317 | Triggered by: https://github.com/osism/testbed
2026-01-05 00:00:21.426338 | Event ID: 9a1e5e94553547229e870b2662f29864
2026-01-05 00:00:21.448013 |
2026-01-05 00:00:21.448170 | LOOP [emit-job-header : Print node information]
2026-01-05 00:00:21.780703 | orchestrator | ok:
2026-01-05 00:00:21.780965 | orchestrator | # Node Information
2026-01-05 00:00:21.781003 | orchestrator | Inventory Hostname: orchestrator
2026-01-05 00:00:21.781029 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-01-05 00:00:21.781051 | orchestrator | Username: zuul-testbed01
2026-01-05 00:00:21.781072 | orchestrator | Distro: Debian 12.12
2026-01-05 00:00:21.781096 | orchestrator | Provider: static-testbed
2026-01-05 00:00:21.781116 | orchestrator | Region:
2026-01-05 00:00:21.781137 | orchestrator | Label: testbed-orchestrator
2026-01-05 00:00:21.781173 | orchestrator | Product Name: OpenStack Nova
2026-01-05 00:00:21.781193 | orchestrator | Interface IP: 81.163.193.140
2026-01-05 00:00:21.820336 |
2026-01-05 00:00:21.820529 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-01-05 00:00:25.274239 | orchestrator -> localhost | changed
2026-01-05 00:00:25.293079 |
2026-01-05 00:00:25.293838 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-01-05 00:00:30.172238 | orchestrator -> localhost | changed
2026-01-05 00:00:30.195448 |
2026-01-05 00:00:30.199955 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-01-05 00:00:31.288249 | orchestrator -> localhost | ok
2026-01-05 00:00:31.302976 |
2026-01-05 00:00:31.303414 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-01-05 00:00:31.364483 | orchestrator | ok
2026-01-05 00:00:31.407830 | orchestrator | included: /var/lib/zuul/builds/5fd7b39a1a694aa3b9baae85283a997d/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-01-05 00:00:31.445202 |
2026-01-05 00:00:31.445317 | TASK [add-build-sshkey : Create Temp SSH key]
2026-01-05 00:00:36.603905 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-01-05 00:00:36.604060 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/5fd7b39a1a694aa3b9baae85283a997d/work/5fd7b39a1a694aa3b9baae85283a997d_id_rsa
2026-01-05 00:00:36.604090 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/5fd7b39a1a694aa3b9baae85283a997d/work/5fd7b39a1a694aa3b9baae85283a997d_id_rsa.pub
2026-01-05 00:00:36.604111 | orchestrator -> localhost | The key fingerprint is:
2026-01-05 00:00:36.604133 | orchestrator -> localhost | SHA256:XNZqO0XDWBGXUtzUtjxSlYJp5RaGReWcmXisfwKhW/4 zuul-build-sshkey
2026-01-05 00:00:36.604151 | orchestrator -> localhost | The key's randomart image is:
2026-01-05 00:00:36.604179 | orchestrator -> localhost | +---[RSA 3072]----+
2026-01-05 00:00:36.604197 | orchestrator -> localhost | | @@+=*|
2026-01-05 00:00:36.604215 | orchestrator -> localhost | | X+oOoB|
2026-01-05 00:00:36.604231 | orchestrator -> localhost | | = *=+X.|
2026-01-05 00:00:36.604247 | orchestrator -> localhost | | . o +.+o+ |
2026-01-05 00:00:36.604263 | orchestrator -> localhost | | S + +.. .|
2026-01-05 00:00:36.604284 | orchestrator -> localhost | | . * .. |
2026-01-05 00:00:36.604302 | orchestrator -> localhost | | + . ...|
2026-01-05 00:00:36.604318 | orchestrator -> localhost | | . . ..|
2026-01-05 00:00:36.604335 | orchestrator -> localhost | | E |
2026-01-05 00:00:36.604353 | orchestrator -> localhost | +----[SHA256]-----+
2026-01-05 00:00:36.604404 | orchestrator -> localhost | ok: Runtime: 0:00:03.541592
2026-01-05 00:00:36.611168 |
2026-01-05 00:00:36.611254 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-01-05 00:00:36.678805 | orchestrator | ok
2026-01-05 00:00:36.693007 | orchestrator | included: /var/lib/zuul/builds/5fd7b39a1a694aa3b9baae85283a997d/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-01-05 00:00:36.749665 |
2026-01-05 00:00:36.749761 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-01-05 00:00:36.827438 | orchestrator | skipping: Conditional result was False
2026-01-05 00:00:36.834923 |
2026-01-05 00:00:36.835019 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-01-05 00:00:38.334691 | orchestrator | changed
2026-01-05 00:00:38.341142 |
2026-01-05 00:00:38.341237 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-01-05 00:00:38.718711 | orchestrator | ok
2026-01-05 00:00:38.733628 |
2026-01-05 00:00:38.733726 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-01-05 00:00:39.353267 | orchestrator | ok
2026-01-05 00:00:39.367238 |
2026-01-05 00:00:39.367339 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-01-05 00:00:39.873677 | orchestrator | ok
2026-01-05 00:00:39.882193 |
2026-01-05 00:00:39.882296 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-01-05 00:00:39.937101 | orchestrator | skipping: Conditional result was False
2026-01-05 00:00:39.942871 |
2026-01-05 00:00:39.942967 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-01-05 00:00:41.098421 | orchestrator -> localhost | changed
2026-01-05 00:00:41.109990 |
2026-01-05 00:00:41.110082 | TASK [add-build-sshkey : Add back temp key]
2026-01-05 00:00:42.151930 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/5fd7b39a1a694aa3b9baae85283a997d/work/5fd7b39a1a694aa3b9baae85283a997d_id_rsa (zuul-build-sshkey)
2026-01-05 00:00:42.152118 | orchestrator -> localhost | ok: Runtime: 0:00:00.025779
2026-01-05 00:00:42.158307 |
2026-01-05 00:00:42.158405 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-01-05 00:00:42.956923 | orchestrator | ok
2026-01-05 00:00:42.962072 |
2026-01-05 00:00:42.962159 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-01-05 00:00:43.007853 | orchestrator | skipping: Conditional result was False
2026-01-05 00:00:43.083028 |
2026-01-05 00:00:43.083125 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-01-05 00:00:43.807509 | orchestrator | ok
2026-01-05 00:00:43.825894 |
2026-01-05 00:00:43.825996 | TASK [validate-host : Define zuul_info_dir fact]
2026-01-05 00:00:43.874178 | orchestrator | ok
2026-01-05 00:00:43.894221 |
2026-01-05 00:00:43.894324 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-01-05 00:00:45.015981 | orchestrator -> localhost | ok
2026-01-05 00:00:45.026192 |
2026-01-05 00:00:45.026289 | TASK [validate-host : Collect information about the host]
2026-01-05 00:00:46.719657 | orchestrator | ok
2026-01-05 00:00:46.744339 |
2026-01-05 00:00:46.744456 | TASK [validate-host : Sanitize hostname]
2026-01-05 00:00:46.797013 | orchestrator | ok
2026-01-05 00:00:46.805548 |
2026-01-05 00:00:46.805639 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-01-05 00:00:49.386436 | orchestrator -> localhost | changed
2026-01-05 00:00:49.391565 |
2026-01-05 00:00:49.391651 | TASK [validate-host : Collect information about zuul worker]
2026-01-05 00:00:50.674806 | orchestrator | ok
2026-01-05 00:00:50.679459 |
2026-01-05 00:00:50.679543 | TASK [validate-host : Write out all zuul information for each host]
2026-01-05 00:00:51.992308 | orchestrator -> localhost | changed
2026-01-05 00:00:52.004109 |
2026-01-05 00:00:52.005339 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-01-05 00:00:52.367281 | orchestrator | ok
2026-01-05 00:00:52.372263 |
2026-01-05 00:00:52.372348 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-01-05 00:02:05.428592 | orchestrator | changed:
2026-01-05 00:02:05.428829 | orchestrator | .d..t...... src/
2026-01-05 00:02:05.428866 | orchestrator | .d..t...... src/github.com/
2026-01-05 00:02:05.428892 | orchestrator | .d..t...... src/github.com/osism/
2026-01-05 00:02:05.428914 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-01-05 00:02:05.428936 | orchestrator | RedHat.yml
2026-01-05 00:02:05.457059 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-01-05 00:02:05.457077 | orchestrator | RedHat.yml
2026-01-05 00:02:05.457131 | orchestrator | = 2.2.0"...
2026-01-05 00:02:19.253271 | orchestrator | - Finding latest version of hashicorp/null...
2026-01-05 00:02:19.273353 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-01-05 00:02:19.419066 | orchestrator | - Installing hashicorp/local v2.6.1...
2026-01-05 00:02:20.092145 | orchestrator | - Installed hashicorp/local v2.6.1 (signed, key ID 0C0AF313E5FD9F80)
2026-01-05 00:02:20.153893 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-01-05 00:02:20.739370 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-01-05 00:02:21.311542 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-01-05 00:02:22.242125 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-01-05 00:02:22.242195 | orchestrator |
2026-01-05 00:02:22.242203 | orchestrator | Providers are signed by their developers.
2026-01-05 00:02:22.242208 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-01-05 00:02:22.242222 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-01-05 00:02:22.242259 | orchestrator |
2026-01-05 00:02:22.242264 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-01-05 00:02:22.242276 | orchestrator | selections it made above. Include this file in your version control repository
2026-01-05 00:02:22.242281 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-01-05 00:02:22.242292 | orchestrator | you run "tofu init" in the future.
2026-01-05 00:02:22.242700 | orchestrator |
2026-01-05 00:02:22.242746 | orchestrator | OpenTofu has been successfully initialized!
2026-01-05 00:02:22.242772 | orchestrator |
2026-01-05 00:02:22.242777 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-01-05 00:02:22.242782 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-01-05 00:02:22.242786 | orchestrator | should now work.
2026-01-05 00:02:22.242791 | orchestrator |
2026-01-05 00:02:22.242795 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-01-05 00:02:22.242799 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-01-05 00:02:22.242810 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-01-05 00:02:22.731028 | orchestrator | Created and switched to workspace "ci"!
2026-01-05 00:02:22.731174 | orchestrator |
2026-01-05 00:02:22.731181 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-01-05 00:02:22.731188 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-01-05 00:02:22.731195 | orchestrator | for this configuration.
2026-01-05 00:02:22.843501 | orchestrator | ci.auto.tfvars
2026-01-05 00:02:22.853269 | orchestrator | default_custom.tf
2026-01-05 00:02:24.230440 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-01-05 00:02:24.749489 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-01-05 00:02:24.964041 | orchestrator |
2026-01-05 00:02:24.964131 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-01-05 00:02:24.964138 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-01-05 00:02:24.964152 | orchestrator | + create
2026-01-05 00:02:24.964158 | orchestrator | <= read (data resources)
2026-01-05 00:02:24.964163 | orchestrator |
2026-01-05 00:02:24.964167 | orchestrator | OpenTofu will perform the following actions:
2026-01-05 00:02:24.964172 | orchestrator |
2026-01-05 00:02:24.964176 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-01-05 00:02:24.964181 | orchestrator | # (config refers to values not yet known)
2026-01-05 00:02:24.964185 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-01-05 00:02:24.964189 | orchestrator | + checksum = (known after apply)
2026-01-05 00:02:24.964193 | orchestrator | + created_at = (known after apply)
2026-01-05 00:02:24.964197 | orchestrator | + file = (known after apply)
2026-01-05 00:02:24.964201 | orchestrator | + id = (known after apply)
2026-01-05 00:02:24.964224 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:24.964229 | orchestrator | + min_disk_gb = (known after apply)
2026-01-05 00:02:24.964233 | orchestrator | + min_ram_mb = (known after apply)
2026-01-05 00:02:24.964237 | orchestrator | + most_recent = true
2026-01-05 00:02:24.964241 | orchestrator | + name = (known after apply)
2026-01-05 00:02:24.964245 | orchestrator | + protected = (known after apply)
2026-01-05 00:02:24.964248 | orchestrator | + region = (known after apply)
2026-01-05 00:02:24.964255 | orchestrator | + schema = (known after apply)
2026-01-05 00:02:24.964259 | orchestrator | + size_bytes = (known after apply)
2026-01-05 00:02:24.964263 | orchestrator | + tags = (known after apply)
2026-01-05 00:02:24.964267 | orchestrator | + updated_at = (known after apply)
2026-01-05 00:02:24.964271 | orchestrator | }
2026-01-05 00:02:24.964279 | orchestrator |
2026-01-05 00:02:24.964283 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-01-05 00:02:24.964287 | orchestrator | # (config refers to values not yet known)
2026-01-05 00:02:24.964291 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-01-05 00:02:24.964295 | orchestrator | + checksum = (known after apply)
2026-01-05 00:02:24.964299 | orchestrator | + created_at = (known after apply)
2026-01-05 00:02:24.964302 | orchestrator | + file = (known after apply)
2026-01-05 00:02:24.964306 | orchestrator | + id = (known after apply)
2026-01-05 00:02:24.964310 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:24.964314 | orchestrator | + min_disk_gb = (known after apply)
2026-01-05 00:02:24.964318 | orchestrator | + min_ram_mb = (known after apply)
2026-01-05 00:02:24.964321 | orchestrator | + most_recent = true
2026-01-05 00:02:24.964325 | orchestrator | + name = (known after apply)
2026-01-05 00:02:24.964329 | orchestrator | + protected = (known after apply)
2026-01-05 00:02:24.964333 | orchestrator | + region = (known after apply)
2026-01-05 00:02:24.964337 | orchestrator | + schema = (known after apply)
2026-01-05 00:02:24.964340 | orchestrator | + size_bytes = (known after apply)
2026-01-05 00:02:24.964344 | orchestrator | + tags = (known after apply)
2026-01-05 00:02:24.964348 | orchestrator | + updated_at = (known after apply)
2026-01-05 00:02:24.964351 | orchestrator | }
2026-01-05 00:02:24.964355 | orchestrator |
2026-01-05 00:02:24.964359 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-01-05 00:02:24.964363 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-01-05 00:02:24.964367 | orchestrator | + content = (known after apply)
2026-01-05 00:02:24.964371 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-05 00:02:24.964375 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-05 00:02:24.964379 | orchestrator | + content_md5 = (known after apply)
2026-01-05 00:02:24.964382 | orchestrator | + content_sha1 = (known after apply)
2026-01-05 00:02:24.964386 | orchestrator | + content_sha256 = (known after apply)
2026-01-05 00:02:24.964390 | orchestrator | + content_sha512 = (known after apply)
2026-01-05 00:02:24.964393 | orchestrator | + directory_permission = "0777"
2026-01-05 00:02:24.964397 | orchestrator | + file_permission = "0644"
2026-01-05 00:02:24.964401 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-01-05 00:02:24.964405 | orchestrator | + id = (known after apply)
2026-01-05 00:02:24.964409 | orchestrator | }
2026-01-05 00:02:24.964415 | orchestrator |
2026-01-05 00:02:24.964418 | orchestrator | # local_file.id_rsa_pub will be created
2026-01-05 00:02:24.964422 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-01-05 00:02:24.964426 | orchestrator | + content = (known after apply)
2026-01-05 00:02:24.964430 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-05 00:02:24.964434 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-05 00:02:24.964437 | orchestrator | + content_md5 = (known after apply)
2026-01-05 00:02:24.964441 | orchestrator | + content_sha1 = (known after apply)
2026-01-05 00:02:24.964445 | orchestrator | + content_sha256 = (known after apply)
2026-01-05 00:02:24.964455 | orchestrator | + content_sha512 = (known after apply)
2026-01-05 00:02:24.964458 | orchestrator | + directory_permission = "0777"
2026-01-05 00:02:24.964462 | orchestrator | + file_permission = "0644"
2026-01-05 00:02:24.964470 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-01-05 00:02:24.964474 | orchestrator | + id = (known after apply)
2026-01-05 00:02:24.964478 | orchestrator | }
2026-01-05 00:02:24.964482 | orchestrator |
2026-01-05 00:02:24.964485 | orchestrator | # local_file.inventory will be created
2026-01-05 00:02:24.964489 | orchestrator | + resource "local_file" "inventory" {
2026-01-05 00:02:24.964493 | orchestrator | + content = (known after apply)
2026-01-05 00:02:24.964497 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-05 00:02:24.964500 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-05 00:02:24.964504 | orchestrator | + content_md5 = (known after apply)
2026-01-05 00:02:24.964508 | orchestrator | + content_sha1 = (known after apply)
2026-01-05 00:02:24.964512 | orchestrator | + content_sha256 = (known after apply)
2026-01-05 00:02:24.964516 | orchestrator | + content_sha512 = (known after apply)
2026-01-05 00:02:24.964520 | orchestrator | + directory_permission = "0777"
2026-01-05 00:02:24.964523 | orchestrator | + file_permission = "0644"
2026-01-05 00:02:24.964527 | orchestrator | + filename = "inventory.ci"
2026-01-05 00:02:24.964531 | orchestrator | + id = (known after apply)
2026-01-05 00:02:24.964534 | orchestrator | }
2026-01-05 00:02:24.964541 | orchestrator |
2026-01-05 00:02:24.964544 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-01-05 00:02:24.964548 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-01-05 00:02:24.964552 | orchestrator | + content = (sensitive value)
2026-01-05 00:02:24.964556 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-05 00:02:24.964559 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-05 00:02:24.964563 | orchestrator | + content_md5 = (known after apply)
2026-01-05 00:02:24.964567 | orchestrator | + content_sha1 = (known after apply)
2026-01-05 00:02:24.964570 | orchestrator | + content_sha256 = (known after apply)
2026-01-05 00:02:24.964574 | orchestrator | + content_sha512 = (known after apply)
2026-01-05 00:02:24.964578 | orchestrator | + directory_permission = "0700"
2026-01-05 00:02:24.964582 | orchestrator | + file_permission = "0600"
2026-01-05 00:02:24.964585 | orchestrator | + filename = ".id_rsa.ci"
2026-01-05 00:02:24.964589 | orchestrator | + id = (known after apply)
2026-01-05 00:02:24.964593 | orchestrator | }
2026-01-05 00:02:24.964597 | orchestrator |
2026-01-05 00:02:24.964600 | orchestrator | # null_resource.node_semaphore will be created
2026-01-05 00:02:24.964604 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-01-05 00:02:24.964608 | orchestrator | + id = (known after apply)
2026-01-05 00:02:24.964612 | orchestrator | }
2026-01-05 00:02:24.964616 | orchestrator |
2026-01-05 00:02:24.964619 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-01-05 00:02:24.964623 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-01-05 00:02:24.964627 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:24.964631 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:24.964634 | orchestrator | + id = (known after apply)
2026-01-05 00:02:24.964638 | orchestrator | + image_id = (known after apply)
2026-01-05 00:02:24.964642 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:24.964646 | orchestrator | + name = "testbed-volume-manager-base"
2026-01-05 00:02:24.964649 | orchestrator | + region = (known after apply)
2026-01-05 00:02:24.964653 | orchestrator | + size = 80
2026-01-05 00:02:24.964657 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:24.964661 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:24.964664 | orchestrator | }
2026-01-05 00:02:24.964671 | orchestrator |
2026-01-05 00:02:24.964674 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-01-05 00:02:24.964678 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-05 00:02:24.964682 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:24.964686 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:24.964689 | orchestrator | + id = (known after apply)
2026-01-05 00:02:24.964696 | orchestrator | + image_id = (known after apply)
2026-01-05 00:02:24.964700 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:24.964704 | orchestrator | + name = "testbed-volume-0-node-base"
2026-01-05 00:02:24.964708 | orchestrator | + region = (known after apply)
2026-01-05 00:02:24.964711 | orchestrator | + size = 80
2026-01-05 00:02:24.964715 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:24.964719 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:24.964723 | orchestrator | }
2026-01-05 00:02:24.964726 | orchestrator |
2026-01-05 00:02:24.964730 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-01-05 00:02:24.964734 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-05 00:02:24.964738 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:24.964742 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:24.964745 | orchestrator | + id = (known after apply)
2026-01-05 00:02:24.964749 | orchestrator | + image_id = (known after apply)
2026-01-05 00:02:24.964753 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:24.964756 | orchestrator | + name = "testbed-volume-1-node-base"
2026-01-05 00:02:24.964760 | orchestrator | + region = (known after apply)
2026-01-05 00:02:24.964764 | orchestrator | + size = 80
2026-01-05 00:02:24.964768 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:24.964771 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:24.964775 | orchestrator | }
2026-01-05 00:02:24.964781 | orchestrator |
2026-01-05 00:02:24.964785 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-01-05 00:02:24.964789 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-05 00:02:24.964792 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:24.964796 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:24.964800 | orchestrator | + id = (known after apply)
2026-01-05 00:02:24.964804 | orchestrator | + image_id = (known after apply)
2026-01-05 00:02:24.964807 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:24.964811 | orchestrator | + name = "testbed-volume-2-node-base"
2026-01-05 00:02:24.964815 | orchestrator | + region = (known after apply)
2026-01-05 00:02:24.964818 | orchestrator | + size = 80
2026-01-05 00:02:24.964825 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:24.964829 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:24.964832 | orchestrator | }
2026-01-05 00:02:24.964836 | orchestrator |
2026-01-05 00:02:24.964840 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-01-05 00:02:24.964844 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-05 00:02:24.964847 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:24.964851 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:24.964855 | orchestrator | + id = (known after apply)
2026-01-05 00:02:24.964889 | orchestrator | + image_id = (known after apply)
2026-01-05 00:02:24.964893 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:24.964897 | orchestrator | + name = "testbed-volume-3-node-base"
2026-01-05 00:02:24.964900 | orchestrator | + region = (known after apply)
2026-01-05 00:02:24.964904 | orchestrator | + size = 80
2026-01-05 00:02:24.964908 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:24.964912 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:24.964916 | orchestrator | }
2026-01-05 00:02:24.964922 | orchestrator |
2026-01-05 00:02:24.964926 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-01-05 00:02:24.964929 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-05 00:02:24.964933 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:24.964937 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:24.964941 | orchestrator | + id = (known after apply)
2026-01-05 00:02:24.964949 | orchestrator | + image_id = (known after apply)
2026-01-05 00:02:24.964952 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:24.964956 | orchestrator | + name = "testbed-volume-4-node-base"
2026-01-05 00:02:24.964960 | orchestrator | + region = (known after apply)
2026-01-05 00:02:24.964964 | orchestrator | + size = 80
2026-01-05 00:02:24.964967 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:24.964971 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:24.964975 | orchestrator | }
2026-01-05 00:02:24.964979 | orchestrator |
2026-01-05 00:02:24.964982 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-01-05 00:02:24.964986 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-05 00:02:24.964990 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:24.964994 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:24.964998 | orchestrator | + id = (known after apply)
2026-01-05 00:02:24.965001 | orchestrator | + image_id = (known after apply)
2026-01-05 00:02:24.965005 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:24.965009 | orchestrator | + name = "testbed-volume-5-node-base"
2026-01-05 00:02:24.965013 | orchestrator | + region = (known after apply)
2026-01-05 00:02:24.965016 | orchestrator | + size = 80
2026-01-05 00:02:24.965020 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:24.965024 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:24.965028 | orchestrator | }
2026-01-05 00:02:24.965031 | orchestrator |
2026-01-05 00:02:24.965035 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-01-05 00:02:24.965039 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-05 00:02:24.965043 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:24.965046 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:24.965050 | orchestrator | + id = (known after apply)
2026-01-05 00:02:24.965054 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:24.965058 | orchestrator | + name = "testbed-volume-0-node-3"
2026-01-05 00:02:24.965061 | orchestrator | + region = (known after apply)
2026-01-05 00:02:24.965065 | orchestrator | + size = 20
2026-01-05 00:02:24.965069 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:24.965073 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:24.965077 | orchestrator | }
2026-01-05 00:02:24.965083 | orchestrator |
2026-01-05 00:02:24.965086 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-01-05 00:02:24.965090 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-05 00:02:24.965094 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:24.965098 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:24.965101 | orchestrator | + id = (known after apply)
2026-01-05 00:02:24.965105 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:24.965109 | orchestrator | + name = "testbed-volume-1-node-4"
2026-01-05 00:02:24.965113 | orchestrator | + region = (known after apply)
2026-01-05 00:02:24.965116 | orchestrator | + size = 20
2026-01-05 00:02:24.965120 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:24.965124 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:24.965128 | orchestrator | }
2026-01-05 00:02:24.965131 | orchestrator |
2026-01-05 00:02:24.965135 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-01-05 00:02:24.965139 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-05 00:02:24.965142 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:24.965146 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:24.965150 | orchestrator | + id = (known after apply)
2026-01-05 00:02:24.965154 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:24.965157 | orchestrator | + name = "testbed-volume-2-node-5"
2026-01-05 00:02:24.965161 | orchestrator | + region = (known after apply)
2026-01-05 00:02:24.965168 | orchestrator | + size = 20
2026-01-05 00:02:24.965172 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:24.965176 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:24.965179 | orchestrator | }
2026-01-05 00:02:24.965183 | orchestrator |
2026-01-05 00:02:24.965187 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-01-05 00:02:24.965191 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-05 00:02:24.965194 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:24.965198 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:24.965202 | orchestrator | + id = (known after apply)
2026-01-05 00:02:24.965209 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:24.965213 | orchestrator | + name = "testbed-volume-3-node-3"
2026-01-05 00:02:24.965217 | orchestrator | + region = (known after apply)
2026-01-05 00:02:24.965221 | orchestrator | + size = 20
2026-01-05 00:02:24.965224 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:24.965228 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:24.965232 | orchestrator | }
2026-01-05 00:02:24.965238 | orchestrator |
2026-01-05 00:02:24.965242 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-01-05 00:02:24.965246 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-05 00:02:24.965249 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:24.965253 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:24.965257 | orchestrator | + id = (known after apply)
2026-01-05 00:02:24.965260 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:24.965264 | orchestrator | + name = "testbed-volume-4-node-4"
2026-01-05 00:02:24.965268 | orchestrator | + region = (known after apply)
2026-01-05 00:02:24.965272 | orchestrator | + size = 20
2026-01-05 00:02:24.965276 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:24.965279 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:24.965283 | orchestrator | }
2026-01-05 00:02:24.965287 | orchestrator |
2026-01-05 00:02:24.965291 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-01-05 00:02:24.965294 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-05 00:02:24.965298 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:24.965302 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:24.965306 | orchestrator | + id = (known after apply)
2026-01-05 00:02:24.965309 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:24.965313 | orchestrator | + name = "testbed-volume-5-node-5"
2026-01-05 00:02:24.965317 | orchestrator | + region = (known after apply)
2026-01-05 00:02:24.965321 | orchestrator | + size = 20
2026-01-05 00:02:24.965324 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:24.965328 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:24.965332 | orchestrator | }
2026-01-05 00:02:24.965336 | orchestrator |
2026-01-05 00:02:24.965339 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-01-05 00:02:24.965343 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-05 00:02:24.965347 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:24.965350 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:24.965354 | orchestrator | + id = (known after apply)
2026-01-05 00:02:24.965358 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:24.965362 | orchestrator | + name = "testbed-volume-6-node-3"
2026-01-05 00:02:24.965365 | orchestrator | + region = (known after apply)
2026-01-05 00:02:24.965369 | orchestrator | + size = 20
2026-01-05 00:02:24.965373 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:24.965377 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:24.965380 | orchestrator | }
2026-01-05 00:02:24.965386 | orchestrator |
2026-01-05 00:02:24.965390 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-01-05 00:02:24.965394 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-05 00:02:24.965400 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:24.965404 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:24.965408 | orchestrator | + id = (known after apply)
2026-01-05 00:02:24.965411 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:24.965415 | orchestrator | + name = "testbed-volume-7-node-4"
2026-01-05 00:02:24.965419 | orchestrator | + region = (known after apply)
2026-01-05 00:02:24.965423 | orchestrator | + size = 20
2026-01-05 00:02:24.965426 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:24.965430 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:24.965434 | orchestrator | }
2026-01-05 00:02:24.965437 | orchestrator |
2026-01-05 00:02:24.965441 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-01-05 00:02:24.965445 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-01-05 00:02:24.965449 | orchestrator | + attachment = (known after apply) 2026-01-05 00:02:24.965452 | orchestrator | + availability_zone = "nova" 2026-01-05 00:02:24.965456 | orchestrator | + id = (known after apply) 2026-01-05 00:02:24.965460 | orchestrator | + metadata = (known after apply) 2026-01-05 00:02:24.965464 | orchestrator | + name = "testbed-volume-8-node-5" 2026-01-05 00:02:24.965467 | orchestrator | + region = (known after apply) 2026-01-05 00:02:24.965471 | orchestrator | + size = 20 2026-01-05 00:02:24.965475 | orchestrator | + volume_retype_policy = "never" 2026-01-05 00:02:24.965479 | orchestrator | + volume_type = "ssd" 2026-01-05 00:02:24.965482 | orchestrator | } 2026-01-05 00:02:24.965533 | orchestrator | 2026-01-05 00:02:24.965540 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-01-05 00:02:24.965543 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-01-05 00:02:24.965547 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-05 00:02:24.965551 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-05 00:02:24.965555 | orchestrator | + all_metadata = (known after apply) 2026-01-05 00:02:24.965558 | orchestrator | + all_tags = (known after apply) 2026-01-05 00:02:24.965562 | orchestrator | + availability_zone = "nova" 2026-01-05 00:02:24.965566 | orchestrator | + config_drive = true 2026-01-05 00:02:24.965573 | orchestrator | + created = (known after apply) 2026-01-05 00:02:24.965577 | orchestrator | + flavor_id = (known after apply) 2026-01-05 00:02:24.965581 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-01-05 00:02:24.965585 | orchestrator | + force_delete = false 2026-01-05 00:02:24.965588 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-05 00:02:24.965592 | 
orchestrator | + id = (known after apply) 2026-01-05 00:02:24.965596 | orchestrator | + image_id = (known after apply) 2026-01-05 00:02:24.965599 | orchestrator | + image_name = (known after apply) 2026-01-05 00:02:24.965603 | orchestrator | + key_pair = "testbed" 2026-01-05 00:02:24.965607 | orchestrator | + name = "testbed-manager" 2026-01-05 00:02:24.965611 | orchestrator | + power_state = "active" 2026-01-05 00:02:24.965614 | orchestrator | + region = (known after apply) 2026-01-05 00:02:24.965618 | orchestrator | + security_groups = (known after apply) 2026-01-05 00:02:24.965622 | orchestrator | + stop_before_destroy = false 2026-01-05 00:02:24.965626 | orchestrator | + updated = (known after apply) 2026-01-05 00:02:24.965629 | orchestrator | + user_data = (sensitive value) 2026-01-05 00:02:24.965633 | orchestrator | 2026-01-05 00:02:24.965637 | orchestrator | + block_device { 2026-01-05 00:02:24.965641 | orchestrator | + boot_index = 0 2026-01-05 00:02:24.965645 | orchestrator | + delete_on_termination = false 2026-01-05 00:02:24.965648 | orchestrator | + destination_type = "volume" 2026-01-05 00:02:24.965652 | orchestrator | + multiattach = false 2026-01-05 00:02:24.965656 | orchestrator | + source_type = "volume" 2026-01-05 00:02:24.965659 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:24.965667 | orchestrator | } 2026-01-05 00:02:24.965671 | orchestrator | 2026-01-05 00:02:24.965675 | orchestrator | + network { 2026-01-05 00:02:24.965679 | orchestrator | + access_network = false 2026-01-05 00:02:24.965682 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-05 00:02:24.965686 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-05 00:02:24.965690 | orchestrator | + mac = (known after apply) 2026-01-05 00:02:24.965694 | orchestrator | + name = (known after apply) 2026-01-05 00:02:24.965697 | orchestrator | + port = (known after apply) 2026-01-05 00:02:24.965701 | orchestrator | + uuid = (known after apply) 2026-01-05 
00:02:24.965705 | orchestrator | } 2026-01-05 00:02:24.965709 | orchestrator | } 2026-01-05 00:02:24.965715 | orchestrator | 2026-01-05 00:02:24.965719 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-01-05 00:02:24.965722 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-05 00:02:24.965726 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-05 00:02:24.965730 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-05 00:02:24.965734 | orchestrator | + all_metadata = (known after apply) 2026-01-05 00:02:24.965737 | orchestrator | + all_tags = (known after apply) 2026-01-05 00:02:24.965741 | orchestrator | + availability_zone = "nova" 2026-01-05 00:02:24.965745 | orchestrator | + config_drive = true 2026-01-05 00:02:24.965748 | orchestrator | + created = (known after apply) 2026-01-05 00:02:24.965752 | orchestrator | + flavor_id = (known after apply) 2026-01-05 00:02:24.965756 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-05 00:02:24.965760 | orchestrator | + force_delete = false 2026-01-05 00:02:24.965763 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-05 00:02:24.965767 | orchestrator | + id = (known after apply) 2026-01-05 00:02:24.965771 | orchestrator | + image_id = (known after apply) 2026-01-05 00:02:24.965775 | orchestrator | + image_name = (known after apply) 2026-01-05 00:02:24.965778 | orchestrator | + key_pair = "testbed" 2026-01-05 00:02:24.965782 | orchestrator | + name = "testbed-node-0" 2026-01-05 00:02:24.965786 | orchestrator | + power_state = "active" 2026-01-05 00:02:24.965790 | orchestrator | + region = (known after apply) 2026-01-05 00:02:24.965793 | orchestrator | + security_groups = (known after apply) 2026-01-05 00:02:24.965797 | orchestrator | + stop_before_destroy = false 2026-01-05 00:02:24.965801 | orchestrator | + updated = (known after apply) 2026-01-05 00:02:24.965805 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-05 00:02:24.965808 | orchestrator | 2026-01-05 00:02:24.965812 | orchestrator | + block_device { 2026-01-05 00:02:24.965816 | orchestrator | + boot_index = 0 2026-01-05 00:02:24.965820 | orchestrator | + delete_on_termination = false 2026-01-05 00:02:24.965823 | orchestrator | + destination_type = "volume" 2026-01-05 00:02:24.965827 | orchestrator | + multiattach = false 2026-01-05 00:02:24.965831 | orchestrator | + source_type = "volume" 2026-01-05 00:02:24.965835 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:24.965838 | orchestrator | } 2026-01-05 00:02:24.965842 | orchestrator | 2026-01-05 00:02:24.965846 | orchestrator | + network { 2026-01-05 00:02:24.965850 | orchestrator | + access_network = false 2026-01-05 00:02:24.965853 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-05 00:02:24.965870 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-05 00:02:24.965874 | orchestrator | + mac = (known after apply) 2026-01-05 00:02:24.965878 | orchestrator | + name = (known after apply) 2026-01-05 00:02:24.965881 | orchestrator | + port = (known after apply) 2026-01-05 00:02:24.965885 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:24.965889 | orchestrator | } 2026-01-05 00:02:24.965893 | orchestrator | } 2026-01-05 00:02:24.965899 | orchestrator | 2026-01-05 00:02:24.965903 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-01-05 00:02:24.965907 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-05 00:02:24.965911 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-05 00:02:24.965917 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-05 00:02:24.965921 | orchestrator | + all_metadata = (known after apply) 2026-01-05 00:02:24.965925 | orchestrator | + all_tags = (known after apply) 2026-01-05 00:02:24.965928 | orchestrator | + availability_zone = "nova" 2026-01-05 00:02:24.965932 
| orchestrator | + config_drive = true 2026-01-05 00:02:24.965936 | orchestrator | + created = (known after apply) 2026-01-05 00:02:24.965940 | orchestrator | + flavor_id = (known after apply) 2026-01-05 00:02:24.965943 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-05 00:02:24.965947 | orchestrator | + force_delete = false 2026-01-05 00:02:24.965951 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-05 00:02:24.965955 | orchestrator | + id = (known after apply) 2026-01-05 00:02:24.965958 | orchestrator | + image_id = (known after apply) 2026-01-05 00:02:24.965962 | orchestrator | + image_name = (known after apply) 2026-01-05 00:02:24.965966 | orchestrator | + key_pair = "testbed" 2026-01-05 00:02:24.965969 | orchestrator | + name = "testbed-node-1" 2026-01-05 00:02:24.965973 | orchestrator | + power_state = "active" 2026-01-05 00:02:24.965977 | orchestrator | + region = (known after apply) 2026-01-05 00:02:24.965981 | orchestrator | + security_groups = (known after apply) 2026-01-05 00:02:24.965984 | orchestrator | + stop_before_destroy = false 2026-01-05 00:02:24.965988 | orchestrator | + updated = (known after apply) 2026-01-05 00:02:24.965995 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-05 00:02:24.965999 | orchestrator | 2026-01-05 00:02:24.966003 | orchestrator | + block_device { 2026-01-05 00:02:24.966007 | orchestrator | + boot_index = 0 2026-01-05 00:02:24.966027 | orchestrator | + delete_on_termination = false 2026-01-05 00:02:24.966032 | orchestrator | + destination_type = "volume" 2026-01-05 00:02:24.966035 | orchestrator | + multiattach = false 2026-01-05 00:02:24.966039 | orchestrator | + source_type = "volume" 2026-01-05 00:02:24.966043 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:24.966047 | orchestrator | } 2026-01-05 00:02:24.966050 | orchestrator | 2026-01-05 00:02:24.966054 | orchestrator | + network { 2026-01-05 00:02:24.966058 | orchestrator | + access_network = 
false 2026-01-05 00:02:24.966062 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-05 00:02:24.966065 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-05 00:02:24.966069 | orchestrator | + mac = (known after apply) 2026-01-05 00:02:24.966073 | orchestrator | + name = (known after apply) 2026-01-05 00:02:24.966077 | orchestrator | + port = (known after apply) 2026-01-05 00:02:24.966080 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:24.966084 | orchestrator | } 2026-01-05 00:02:24.966088 | orchestrator | } 2026-01-05 00:02:24.966095 | orchestrator | 2026-01-05 00:02:24.966098 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-01-05 00:02:24.966102 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-05 00:02:24.966106 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-05 00:02:24.966110 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-05 00:02:24.966114 | orchestrator | + all_metadata = (known after apply) 2026-01-05 00:02:24.966118 | orchestrator | + all_tags = (known after apply) 2026-01-05 00:02:24.966121 | orchestrator | + availability_zone = "nova" 2026-01-05 00:02:24.966125 | orchestrator | + config_drive = true 2026-01-05 00:02:24.966129 | orchestrator | + created = (known after apply) 2026-01-05 00:02:24.966132 | orchestrator | + flavor_id = (known after apply) 2026-01-05 00:02:24.966136 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-05 00:02:24.966140 | orchestrator | + force_delete = false 2026-01-05 00:02:24.966144 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-05 00:02:24.966148 | orchestrator | + id = (known after apply) 2026-01-05 00:02:24.966151 | orchestrator | + image_id = (known after apply) 2026-01-05 00:02:24.966158 | orchestrator | + image_name = (known after apply) 2026-01-05 00:02:24.966162 | orchestrator | + key_pair = "testbed" 2026-01-05 00:02:24.966165 | orchestrator | + name = 
"testbed-node-2" 2026-01-05 00:02:24.966169 | orchestrator | + power_state = "active" 2026-01-05 00:02:24.966173 | orchestrator | + region = (known after apply) 2026-01-05 00:02:24.966177 | orchestrator | + security_groups = (known after apply) 2026-01-05 00:02:24.966180 | orchestrator | + stop_before_destroy = false 2026-01-05 00:02:24.966184 | orchestrator | + updated = (known after apply) 2026-01-05 00:02:24.966188 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-05 00:02:24.966192 | orchestrator | 2026-01-05 00:02:24.966196 | orchestrator | + block_device { 2026-01-05 00:02:24.966199 | orchestrator | + boot_index = 0 2026-01-05 00:02:24.966203 | orchestrator | + delete_on_termination = false 2026-01-05 00:02:24.966207 | orchestrator | + destination_type = "volume" 2026-01-05 00:02:24.966211 | orchestrator | + multiattach = false 2026-01-05 00:02:24.966214 | orchestrator | + source_type = "volume" 2026-01-05 00:02:24.966218 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:24.966222 | orchestrator | } 2026-01-05 00:02:24.966226 | orchestrator | 2026-01-05 00:02:24.966229 | orchestrator | + network { 2026-01-05 00:02:24.966233 | orchestrator | + access_network = false 2026-01-05 00:02:24.966237 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-05 00:02:24.966240 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-05 00:02:24.966244 | orchestrator | + mac = (known after apply) 2026-01-05 00:02:24.966248 | orchestrator | + name = (known after apply) 2026-01-05 00:02:24.966252 | orchestrator | + port = (known after apply) 2026-01-05 00:02:24.966255 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:24.966259 | orchestrator | } 2026-01-05 00:02:24.966263 | orchestrator | } 2026-01-05 00:02:24.966269 | orchestrator | 2026-01-05 00:02:24.966279 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-01-05 00:02:24.966283 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-01-05 00:02:24.966286 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-05 00:02:24.966290 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-05 00:02:24.966294 | orchestrator | + all_metadata = (known after apply) 2026-01-05 00:02:24.966298 | orchestrator | + all_tags = (known after apply) 2026-01-05 00:02:24.966301 | orchestrator | + availability_zone = "nova" 2026-01-05 00:02:24.966305 | orchestrator | + config_drive = true 2026-01-05 00:02:24.966309 | orchestrator | + created = (known after apply) 2026-01-05 00:02:24.966312 | orchestrator | + flavor_id = (known after apply) 2026-01-05 00:02:24.966316 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-05 00:02:24.966320 | orchestrator | + force_delete = false 2026-01-05 00:02:24.966324 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-05 00:02:24.966327 | orchestrator | + id = (known after apply) 2026-01-05 00:02:24.966331 | orchestrator | + image_id = (known after apply) 2026-01-05 00:02:24.966335 | orchestrator | + image_name = (known after apply) 2026-01-05 00:02:24.966339 | orchestrator | + key_pair = "testbed" 2026-01-05 00:02:24.966342 | orchestrator | + name = "testbed-node-3" 2026-01-05 00:02:24.966346 | orchestrator | + power_state = "active" 2026-01-05 00:02:24.966350 | orchestrator | + region = (known after apply) 2026-01-05 00:02:24.966354 | orchestrator | + security_groups = (known after apply) 2026-01-05 00:02:24.966357 | orchestrator | + stop_before_destroy = false 2026-01-05 00:02:24.966361 | orchestrator | + updated = (known after apply) 2026-01-05 00:02:24.966365 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-05 00:02:24.966369 | orchestrator | 2026-01-05 00:02:24.966372 | orchestrator | + block_device { 2026-01-05 00:02:24.966376 | orchestrator | + boot_index = 0 2026-01-05 00:02:24.966380 | orchestrator | + delete_on_termination = false 2026-01-05 
00:02:24.966384 | orchestrator | + destination_type = "volume" 2026-01-05 00:02:24.966391 | orchestrator | + multiattach = false 2026-01-05 00:02:24.966394 | orchestrator | + source_type = "volume" 2026-01-05 00:02:24.966398 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:24.966402 | orchestrator | } 2026-01-05 00:02:24.966406 | orchestrator | 2026-01-05 00:02:24.966409 | orchestrator | + network { 2026-01-05 00:02:24.966413 | orchestrator | + access_network = false 2026-01-05 00:02:24.966417 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-05 00:02:24.966421 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-05 00:02:24.966424 | orchestrator | + mac = (known after apply) 2026-01-05 00:02:24.966428 | orchestrator | + name = (known after apply) 2026-01-05 00:02:24.966432 | orchestrator | + port = (known after apply) 2026-01-05 00:02:24.966435 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:24.966439 | orchestrator | } 2026-01-05 00:02:24.966443 | orchestrator | } 2026-01-05 00:02:24.966449 | orchestrator | 2026-01-05 00:02:24.966453 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-01-05 00:02:24.966457 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-05 00:02:24.966460 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-05 00:02:24.966464 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-05 00:02:24.966468 | orchestrator | + all_metadata = (known after apply) 2026-01-05 00:02:24.966472 | orchestrator | + all_tags = (known after apply) 2026-01-05 00:02:24.966475 | orchestrator | + availability_zone = "nova" 2026-01-05 00:02:24.966479 | orchestrator | + config_drive = true 2026-01-05 00:02:24.966483 | orchestrator | + created = (known after apply) 2026-01-05 00:02:24.966486 | orchestrator | + flavor_id = (known after apply) 2026-01-05 00:02:24.966490 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-05 00:02:24.966494 | 
orchestrator | + force_delete = false 2026-01-05 00:02:24.966498 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-05 00:02:24.966501 | orchestrator | + id = (known after apply) 2026-01-05 00:02:24.966505 | orchestrator | + image_id = (known after apply) 2026-01-05 00:02:24.966509 | orchestrator | + image_name = (known after apply) 2026-01-05 00:02:24.966512 | orchestrator | + key_pair = "testbed" 2026-01-05 00:02:24.966516 | orchestrator | + name = "testbed-node-4" 2026-01-05 00:02:24.966520 | orchestrator | + power_state = "active" 2026-01-05 00:02:24.966523 | orchestrator | + region = (known after apply) 2026-01-05 00:02:24.966527 | orchestrator | + security_groups = (known after apply) 2026-01-05 00:02:24.966531 | orchestrator | + stop_before_destroy = false 2026-01-05 00:02:24.966535 | orchestrator | + updated = (known after apply) 2026-01-05 00:02:24.966538 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-05 00:02:24.966542 | orchestrator | 2026-01-05 00:02:24.966546 | orchestrator | + block_device { 2026-01-05 00:02:24.966550 | orchestrator | + boot_index = 0 2026-01-05 00:02:24.966554 | orchestrator | + delete_on_termination = false 2026-01-05 00:02:24.966557 | orchestrator | + destination_type = "volume" 2026-01-05 00:02:24.966561 | orchestrator | + multiattach = false 2026-01-05 00:02:24.966565 | orchestrator | + source_type = "volume" 2026-01-05 00:02:24.966568 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:24.966572 | orchestrator | } 2026-01-05 00:02:24.966576 | orchestrator | 2026-01-05 00:02:24.966580 | orchestrator | + network { 2026-01-05 00:02:24.966583 | orchestrator | + access_network = false 2026-01-05 00:02:24.966587 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-05 00:02:24.966591 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-05 00:02:24.966594 | orchestrator | + mac = (known after apply) 2026-01-05 00:02:24.966598 | orchestrator | + name = (known 
after apply) 2026-01-05 00:02:24.966602 | orchestrator | + port = (known after apply) 2026-01-05 00:02:24.966606 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:24.966609 | orchestrator | } 2026-01-05 00:02:24.966613 | orchestrator | } 2026-01-05 00:02:24.966623 | orchestrator | 2026-01-05 00:02:24.966627 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-01-05 00:02:24.966630 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-05 00:02:24.966634 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-05 00:02:24.966638 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-05 00:02:24.966642 | orchestrator | + all_metadata = (known after apply) 2026-01-05 00:02:24.966645 | orchestrator | + all_tags = (known after apply) 2026-01-05 00:02:24.966649 | orchestrator | + availability_zone = "nova" 2026-01-05 00:02:24.966653 | orchestrator | + config_drive = true 2026-01-05 00:02:24.966657 | orchestrator | + created = (known after apply) 2026-01-05 00:02:24.966660 | orchestrator | + flavor_id = (known after apply) 2026-01-05 00:02:24.966664 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-05 00:02:24.966668 | orchestrator | + force_delete = false 2026-01-05 00:02:24.966671 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-05 00:02:24.966675 | orchestrator | + id = (known after apply) 2026-01-05 00:02:24.966679 | orchestrator | + image_id = (known after apply) 2026-01-05 00:02:24.966683 | orchestrator | + image_name = (known after apply) 2026-01-05 00:02:24.966686 | orchestrator | + key_pair = "testbed" 2026-01-05 00:02:24.966690 | orchestrator | + name = "testbed-node-5" 2026-01-05 00:02:24.966694 | orchestrator | + power_state = "active" 2026-01-05 00:02:24.966697 | orchestrator | + region = (known after apply) 2026-01-05 00:02:24.966701 | orchestrator | + security_groups = (known after apply) 2026-01-05 00:02:24.966705 | orchestrator | + 
stop_before_destroy = false 2026-01-05 00:02:24.966709 | orchestrator | + updated = (known after apply) 2026-01-05 00:02:24.966712 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-05 00:02:24.966716 | orchestrator | 2026-01-05 00:02:24.966720 | orchestrator | + block_device { 2026-01-05 00:02:24.966724 | orchestrator | + boot_index = 0 2026-01-05 00:02:24.966727 | orchestrator | + delete_on_termination = false 2026-01-05 00:02:24.966731 | orchestrator | + destination_type = "volume" 2026-01-05 00:02:24.966735 | orchestrator | + multiattach = false 2026-01-05 00:02:24.966738 | orchestrator | + source_type = "volume" 2026-01-05 00:02:24.966742 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:24.966746 | orchestrator | } 2026-01-05 00:02:24.966750 | orchestrator | 2026-01-05 00:02:24.966753 | orchestrator | + network { 2026-01-05 00:02:24.966757 | orchestrator | + access_network = false 2026-01-05 00:02:24.966761 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-05 00:02:24.966765 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-05 00:02:24.966769 | orchestrator | + mac = (known after apply) 2026-01-05 00:02:24.966772 | orchestrator | + name = (known after apply) 2026-01-05 00:02:24.966776 | orchestrator | + port = (known after apply) 2026-01-05 00:02:24.966780 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:24.966784 | orchestrator | } 2026-01-05 00:02:24.966788 | orchestrator | } 2026-01-05 00:02:24.966793 | orchestrator | 2026-01-05 00:02:24.966797 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-01-05 00:02:24.966801 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-01-05 00:02:24.966805 | orchestrator | + fingerprint = (known after apply) 2026-01-05 00:02:24.966808 | orchestrator | + id = (known after apply) 2026-01-05 00:02:24.966812 | orchestrator | + name = "testbed" 2026-01-05 00:02:24.966816 | orchestrator | + private_key = 
(sensitive value) 2026-01-05 00:02:24.966819 | orchestrator | + public_key = (known after apply) 2026-01-05 00:02:24.966823 | orchestrator | + region = (known after apply) 2026-01-05 00:02:24.966827 | orchestrator | + user_id = (known after apply) 2026-01-05 00:02:24.966831 | orchestrator | } 2026-01-05 00:02:24.966834 | orchestrator | 2026-01-05 00:02:24.966838 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-01-05 00:02:24.966842 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-05 00:02:24.966849 | orchestrator | + device = (known after apply) 2026-01-05 00:02:24.966853 | orchestrator | + id = (known after apply) 2026-01-05 00:02:24.966868 | orchestrator | + instance_id = (known after apply) 2026-01-05 00:02:24.966872 | orchestrator | + region = (known after apply) 2026-01-05 00:02:24.966878 | orchestrator | + volume_id = (known after apply) 2026-01-05 00:02:24.966882 | orchestrator | } 2026-01-05 00:02:24.966886 | orchestrator | 2026-01-05 00:02:24.966890 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-01-05 00:02:24.966894 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-05 00:02:24.966897 | orchestrator | + device = (known after apply) 2026-01-05 00:02:24.966901 | orchestrator | + id = (known after apply) 2026-01-05 00:02:24.966905 | orchestrator | + instance_id = (known after apply) 2026-01-05 00:02:24.966909 | orchestrator | + region = (known after apply) 2026-01-05 00:02:24.966912 | orchestrator | + volume_id = (known after apply) 2026-01-05 00:02:24.966916 | orchestrator | } 2026-01-05 00:02:24.966920 | orchestrator | 2026-01-05 00:02:24.966924 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-01-05 00:02:24.966927 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
2026-01-05 00:02:24.966931 | orchestrator |       + device      = (known after apply)
2026-01-05 00:02:24.966935 | orchestrator |       + id          = (known after apply)
2026-01-05 00:02:24.966939 | orchestrator |       + instance_id = (known after apply)
2026-01-05 00:02:24.966942 | orchestrator |       + region      = (known after apply)
2026-01-05 00:02:24.966946 | orchestrator |       + volume_id   = (known after apply)
2026-01-05 00:02:24.966950 | orchestrator |     }
2026-01-05 00:02:24.966956 | orchestrator |
2026-01-05 00:02:24.966960 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
2026-01-05 00:02:24.966964 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-01-05 00:02:24.966967 | orchestrator |       + device      = (known after apply)
2026-01-05 00:02:24.966971 | orchestrator |       + id          = (known after apply)
2026-01-05 00:02:24.966975 | orchestrator |       + instance_id = (known after apply)
2026-01-05 00:02:24.966979 | orchestrator |       + region      = (known after apply)
2026-01-05 00:02:24.966982 | orchestrator |       + volume_id   = (known after apply)
2026-01-05 00:02:24.966986 | orchestrator |     }
2026-01-05 00:02:24.966990 | orchestrator |
2026-01-05 00:02:24.966994 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
2026-01-05 00:02:24.966997 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-01-05 00:02:24.967001 | orchestrator |       + device      = (known after apply)
2026-01-05 00:02:24.967005 | orchestrator |       + id          = (known after apply)
2026-01-05 00:02:24.967009 | orchestrator |       + instance_id = (known after apply)
2026-01-05 00:02:24.967012 | orchestrator |       + region      = (known after apply)
2026-01-05 00:02:24.967016 | orchestrator |       + volume_id   = (known after apply)
2026-01-05 00:02:24.967020 | orchestrator |     }
2026-01-05 00:02:24.967024 | orchestrator |
2026-01-05 00:02:24.967027 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
2026-01-05 00:02:24.967031 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-01-05 00:02:24.967035 | orchestrator |       + device      = (known after apply)
2026-01-05 00:02:24.967039 | orchestrator |       + id          = (known after apply)
2026-01-05 00:02:24.967042 | orchestrator |       + instance_id = (known after apply)
2026-01-05 00:02:24.967046 | orchestrator |       + region      = (known after apply)
2026-01-05 00:02:24.967050 | orchestrator |       + volume_id   = (known after apply)
2026-01-05 00:02:24.967053 | orchestrator |     }
2026-01-05 00:02:24.967057 | orchestrator |
2026-01-05 00:02:24.967061 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
2026-01-05 00:02:24.967065 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-01-05 00:02:24.967068 | orchestrator |       + device      = (known after apply)
2026-01-05 00:02:24.967072 | orchestrator |       + id          = (known after apply)
2026-01-05 00:02:24.967076 | orchestrator |       + instance_id = (known after apply)
2026-01-05 00:02:24.967080 | orchestrator |       + region      = (known after apply)
2026-01-05 00:02:24.967086 | orchestrator |       + volume_id   = (known after apply)
2026-01-05 00:02:24.967090 | orchestrator |     }
2026-01-05 00:02:24.967094 | orchestrator |
2026-01-05 00:02:24.967098 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
2026-01-05 00:02:24.967102 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-01-05 00:02:24.967105 | orchestrator |       + device      = (known after apply)
2026-01-05 00:02:24.967109 | orchestrator |       + id          = (known after apply)
2026-01-05 00:02:24.967113 | orchestrator |       + instance_id = (known after apply)
2026-01-05 00:02:24.967117 | orchestrator |       + region      = (known after apply)
2026-01-05 00:02:24.967120 | orchestrator |       + volume_id   = (known after apply)
2026-01-05 00:02:24.967124 | orchestrator |     }
2026-01-05 00:02:24.967128 | orchestrator |
2026-01-05 00:02:24.967131 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
2026-01-05 00:02:24.967135 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-01-05 00:02:24.967139 | orchestrator |       + device      = (known after apply)
2026-01-05 00:02:24.967143 | orchestrator |       + id          = (known after apply)
2026-01-05 00:02:24.967146 | orchestrator |       + instance_id = (known after apply)
2026-01-05 00:02:24.967150 | orchestrator |       + region      = (known after apply)
2026-01-05 00:02:24.967154 | orchestrator |       + volume_id   = (known after apply)
2026-01-05 00:02:24.967158 | orchestrator |     }
2026-01-05 00:02:24.967164 | orchestrator |
2026-01-05 00:02:24.967168 | orchestrator |   # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
2026-01-05 00:02:24.967172 | orchestrator |   + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
2026-01-05 00:02:24.967176 | orchestrator |       + fixed_ip    = (known after apply)
2026-01-05 00:02:24.967180 | orchestrator |       + floating_ip = (known after apply)
2026-01-05 00:02:24.967184 | orchestrator |       + id          = (known after apply)
2026-01-05 00:02:24.967187 | orchestrator |       + port_id     = (known after apply)
2026-01-05 00:02:24.967191 | orchestrator |       + region      = (known after apply)
2026-01-05 00:02:24.967195 | orchestrator |     }
2026-01-05 00:02:24.967198 | orchestrator |
2026-01-05 00:02:24.967202 | orchestrator |   # openstack_networking_floatingip_v2.manager_floating_ip will be created
2026-01-05 00:02:24.967206 | orchestrator |   + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
2026-01-05 00:02:24.967210 | orchestrator |       + address    = (known after apply)
2026-01-05 00:02:24.967213 | orchestrator |       + all_tags   = (known after apply)
2026-01-05 00:02:24.967219 | orchestrator |       + dns_domain = (known after apply)
2026-01-05 00:02:24.967223 | orchestrator |       + dns_name   = (known after apply)
2026-01-05 00:02:24.967227 | orchestrator |       + fixed_ip   = (known after apply)
2026-01-05 00:02:24.967231 | orchestrator |       + id         = (known after apply)
2026-01-05 00:02:24.967235 | orchestrator |       + pool       = "public"
2026-01-05 00:02:24.967239 | orchestrator |       + port_id    = (known after apply)
2026-01-05 00:02:24.967242 | orchestrator |       + region     = (known after apply)
2026-01-05 00:02:24.967246 | orchestrator |       + subnet_id  = (known after apply)
2026-01-05 00:02:24.967250 | orchestrator |       + tenant_id  = (known after apply)
2026-01-05 00:02:24.967254 | orchestrator |     }
2026-01-05 00:02:24.967257 | orchestrator |
2026-01-05 00:02:24.967261 | orchestrator |   # openstack_networking_network_v2.net_management will be created
2026-01-05 00:02:24.967265 | orchestrator |   + resource "openstack_networking_network_v2" "net_management" {
2026-01-05 00:02:24.967269 | orchestrator |       + admin_state_up          = (known after apply)
2026-01-05 00:02:24.967272 | orchestrator |       + all_tags                = (known after apply)
2026-01-05 00:02:24.967276 | orchestrator |       + availability_zone_hints = [
2026-01-05 00:02:24.967280 | orchestrator |           + "nova",
2026-01-05 00:02:24.967284 | orchestrator |         ]
2026-01-05 00:02:24.967287 | orchestrator |       + dns_domain              = (known after apply)
2026-01-05 00:02:24.967291 | orchestrator |       + external                = (known after apply)
2026-01-05 00:02:24.967295 | orchestrator |       + id                      = (known after apply)
2026-01-05 00:02:24.967299 | orchestrator |       + mtu                     = (known after apply)
2026-01-05 00:02:24.967302 | orchestrator |       + name                    = "net-testbed-management"
2026-01-05 00:02:24.967306 | orchestrator |       + port_security_enabled   = (known after apply)
2026-01-05 00:02:24.967313 | orchestrator |       + qos_policy_id           = (known after apply)
2026-01-05 00:02:24.967317 | orchestrator |       + region                  = (known after apply)
2026-01-05 00:02:24.967320 | orchestrator |       + shared                  = (known after apply)
2026-01-05 00:02:24.967324 | orchestrator |       + tenant_id               = (known after apply)
2026-01-05 00:02:24.967328 | orchestrator |       + transparent_vlan        = (known after apply)
2026-01-05 00:02:24.967332 | orchestrator |
2026-01-05 00:02:24.967335 | orchestrator |       + segments (known after apply)
2026-01-05 00:02:24.967339 | orchestrator |     }
2026-01-05 00:02:24.967346 | orchestrator |
2026-01-05 00:02:24.967349 | orchestrator |   # openstack_networking_port_v2.manager_port_management will be created
2026-01-05 00:02:24.967353 | orchestrator |   + resource "openstack_networking_port_v2" "manager_port_management" {
2026-01-05 00:02:24.967357 | orchestrator |       + admin_state_up         = (known after apply)
2026-01-05 00:02:24.967361 | orchestrator |       + all_fixed_ips          = (known after apply)
2026-01-05 00:02:24.967364 | orchestrator |       + all_security_group_ids = (known after apply)
2026-01-05 00:02:24.967368 | orchestrator |       + all_tags               = (known after apply)
2026-01-05 00:02:24.967372 | orchestrator |       + device_id              = (known after apply)
2026-01-05 00:02:24.967376 | orchestrator |       + device_owner           = (known after apply)
2026-01-05 00:02:24.967379 | orchestrator |       + dns_assignment         = (known after apply)
2026-01-05 00:02:24.967383 | orchestrator |       + dns_name               = (known after apply)
2026-01-05 00:02:24.967387 | orchestrator |       + id                     = (known after apply)
2026-01-05 00:02:24.967390 | orchestrator |       + mac_address            = (known after apply)
2026-01-05 00:02:24.967394 | orchestrator |       + network_id             = (known after apply)
2026-01-05 00:02:24.967398 | orchestrator |       + port_security_enabled  = (known after apply)
2026-01-05 00:02:24.967402 | orchestrator |       + qos_policy_id          = (known after apply)
2026-01-05 00:02:24.967405 | orchestrator |       + region                 = (known after apply)
2026-01-05 00:02:24.967409 | orchestrator |       + security_group_ids     = (known after apply)
2026-01-05 00:02:24.967413 | orchestrator |       + tenant_id              = (known after apply)
2026-01-05 00:02:24.967417 | orchestrator |
2026-01-05 00:02:24.967420 | orchestrator |       + allowed_address_pairs {
2026-01-05 00:02:24.967424 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-01-05 00:02:24.967428 | orchestrator |         }
2026-01-05 00:02:24.967432 | orchestrator |
2026-01-05 00:02:24.967435 | orchestrator |       + binding (known after apply)
2026-01-05 00:02:24.967439 | orchestrator |
2026-01-05 00:02:24.967443 | orchestrator |       + fixed_ip {
2026-01-05 00:02:24.967447 | orchestrator |           + ip_address = "192.168.16.5"
2026-01-05 00:02:24.967450 | orchestrator |           + subnet_id  = (known after apply)
2026-01-05 00:02:24.967454 | orchestrator |         }
2026-01-05 00:02:24.967458 | orchestrator |     }
2026-01-05 00:02:24.967462 | orchestrator |
2026-01-05 00:02:24.967465 | orchestrator |   # openstack_networking_port_v2.node_port_management[0] will be created
2026-01-05 00:02:24.967469 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-01-05 00:02:24.967473 | orchestrator |       + admin_state_up         = (known after apply)
2026-01-05 00:02:24.967477 | orchestrator |       + all_fixed_ips          = (known after apply)
2026-01-05 00:02:24.967480 | orchestrator |       + all_security_group_ids = (known after apply)
2026-01-05 00:02:24.967484 | orchestrator |       + all_tags               = (known after apply)
2026-01-05 00:02:24.967488 | orchestrator |       + device_id              = (known after apply)
2026-01-05 00:02:24.967491 | orchestrator |       + device_owner           = (known after apply)
2026-01-05 00:02:24.967495 | orchestrator |       + dns_assignment         = (known after apply)
2026-01-05 00:02:24.967499 | orchestrator |       + dns_name               = (known after apply)
2026-01-05 00:02:24.967503 | orchestrator |       + id                     = (known after apply)
2026-01-05 00:02:24.967506 | orchestrator |       + mac_address            = (known after apply)
2026-01-05 00:02:24.967510 | orchestrator |       + network_id             = (known after apply)
2026-01-05 00:02:24.967514 | orchestrator |       + port_security_enabled  = (known after apply)
2026-01-05 00:02:24.967517 | orchestrator |       + qos_policy_id          = (known after apply)
2026-01-05 00:02:24.967521 | orchestrator |       + region                 = (known after apply)
2026-01-05 00:02:24.967528 | orchestrator |       + security_group_ids     = (known after apply)
2026-01-05 00:02:24.967531 | orchestrator |       + tenant_id              = (known after apply)
2026-01-05 00:02:24.967535 | orchestrator |
2026-01-05 00:02:24.967539 | orchestrator |       + allowed_address_pairs {
2026-01-05 00:02:24.967543 | orchestrator |           + ip_address = "192.168.16.254/32"
2026-01-05 00:02:24.967546 | orchestrator |         }
2026-01-05 00:02:24.967550 | orchestrator |       + allowed_address_pairs {
2026-01-05 00:02:24.967554 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-01-05 00:02:24.967558 | orchestrator |         }
2026-01-05 00:02:24.967561 | orchestrator |       + allowed_address_pairs {
2026-01-05 00:02:24.967565 | orchestrator |           + ip_address = "192.168.16.9/32"
2026-01-05 00:02:24.967569 | orchestrator |         }
2026-01-05 00:02:24.967573 | orchestrator |
2026-01-05 00:02:24.967576 | orchestrator |       + binding (known after apply)
2026-01-05 00:02:24.967580 | orchestrator |
2026-01-05 00:02:24.967584 | orchestrator |       + fixed_ip {
2026-01-05 00:02:24.967588 | orchestrator |           + ip_address = "192.168.16.10"
2026-01-05 00:02:24.967591 | orchestrator |           + subnet_id  = (known after apply)
2026-01-05 00:02:24.967595 | orchestrator |         }
2026-01-05 00:02:24.967599 | orchestrator |     }
2026-01-05 00:02:24.967605 | orchestrator |
2026-01-05 00:02:24.967609 | orchestrator |   # openstack_networking_port_v2.node_port_management[1] will be created
2026-01-05 00:02:24.967613 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-01-05 00:02:24.967621 | orchestrator |       + admin_state_up         = (known after apply)
2026-01-05 00:02:24.967625 | orchestrator |       + all_fixed_ips          = (known after apply)
2026-01-05 00:02:24.967629 | orchestrator |       + all_security_group_ids = (known after apply)
2026-01-05 00:02:24.967632 | orchestrator |       + all_tags               = (known after apply)
2026-01-05 00:02:24.967636 | orchestrator |       + device_id              = (known after apply)
2026-01-05 00:02:24.967640 | orchestrator |       + device_owner           = (known after apply)
2026-01-05 00:02:24.967644 | orchestrator |       + dns_assignment         = (known after apply)
2026-01-05 00:02:24.967647 | orchestrator |       + dns_name               = (known after apply)
2026-01-05 00:02:24.967651 | orchestrator |       + id                     = (known after apply)
2026-01-05 00:02:24.967655 | orchestrator |       + mac_address            = (known after apply)
2026-01-05 00:02:24.967658 | orchestrator |       + network_id             = (known after apply)
2026-01-05 00:02:24.967662 | orchestrator |       + port_security_enabled  = (known after apply)
2026-01-05 00:02:24.967666 | orchestrator |       + qos_policy_id          = (known after apply)
2026-01-05 00:02:24.967670 | orchestrator |       + region                 = (known after apply)
2026-01-05 00:02:24.967673 | orchestrator |       + security_group_ids     = (known after apply)
2026-01-05 00:02:24.967677 | orchestrator |       + tenant_id              = (known after apply)
2026-01-05 00:02:24.967681 | orchestrator |
2026-01-05 00:02:24.967685 | orchestrator |       + allowed_address_pairs {
2026-01-05 00:02:24.967688 | orchestrator |           + ip_address = "192.168.16.254/32"
2026-01-05 00:02:24.967692 | orchestrator |         }
2026-01-05 00:02:24.967696 | orchestrator |       + allowed_address_pairs {
2026-01-05 00:02:24.967699 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-01-05 00:02:24.967703 | orchestrator |         }
2026-01-05 00:02:24.967707 | orchestrator |       + allowed_address_pairs {
2026-01-05 00:02:24.967711 | orchestrator |           + ip_address = "192.168.16.9/32"
2026-01-05 00:02:24.967714 | orchestrator |         }
2026-01-05 00:02:24.967718 | orchestrator |
2026-01-05 00:02:24.967722 | orchestrator |       + binding (known after apply)
2026-01-05 00:02:24.967726 | orchestrator |
2026-01-05 00:02:24.967729 | orchestrator |       + fixed_ip {
2026-01-05 00:02:24.967733 | orchestrator |           + ip_address = "192.168.16.11"
2026-01-05 00:02:24.967737 | orchestrator |           + subnet_id  = (known after apply)
2026-01-05 00:02:24.967741 | orchestrator |         }
2026-01-05 00:02:24.967744 | orchestrator |     }
2026-01-05 00:02:24.967748 | orchestrator |
2026-01-05 00:02:24.967752 | orchestrator |   # openstack_networking_port_v2.node_port_management[2] will be created
2026-01-05 00:02:24.967756 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-01-05 00:02:24.967760 | orchestrator |       + admin_state_up         = (known after apply)
2026-01-05 00:02:24.967763 | orchestrator |       + all_fixed_ips          = (known after apply)
2026-01-05 00:02:24.967767 | orchestrator |       + all_security_group_ids = (known after apply)
2026-01-05 00:02:24.967771 | orchestrator |       + all_tags               = (known after apply)
2026-01-05 00:02:24.967777 | orchestrator |       + device_id              = (known after apply)
2026-01-05 00:02:24.967781 | orchestrator |       + device_owner           = (known after apply)
2026-01-05 00:02:24.967785 | orchestrator |       + dns_assignment         = (known after apply)
2026-01-05 00:02:24.967788 | orchestrator |       + dns_name               = (known after apply)
2026-01-05 00:02:24.967792 | orchestrator |       + id                     = (known after apply)
2026-01-05 00:02:24.967796 | orchestrator |       + mac_address            = (known after apply)
2026-01-05 00:02:24.967800 | orchestrator |       + network_id             = (known after apply)
2026-01-05 00:02:24.967803 | orchestrator |       + port_security_enabled  = (known after apply)
2026-01-05 00:02:24.967807 | orchestrator |       + qos_policy_id          = (known after apply)
2026-01-05 00:02:24.967811 | orchestrator |       + region                 = (known after apply)
2026-01-05 00:02:24.967814 | orchestrator |       + security_group_ids     = (known after apply)
2026-01-05 00:02:24.967818 | orchestrator |       + tenant_id              = (known after apply)
2026-01-05 00:02:24.967822 | orchestrator |
2026-01-05 00:02:24.967826 | orchestrator |       + allowed_address_pairs {
2026-01-05 00:02:24.967829 | orchestrator |           + ip_address = "192.168.16.254/32"
2026-01-05 00:02:24.967833 | orchestrator |         }
2026-01-05 00:02:24.967837 | orchestrator |       + allowed_address_pairs {
2026-01-05 00:02:24.967841 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-01-05 00:02:24.967844 | orchestrator |         }
2026-01-05 00:02:24.967848 | orchestrator |       + allowed_address_pairs {
2026-01-05 00:02:24.967852 | orchestrator |           + ip_address = "192.168.16.9/32"
2026-01-05 00:02:24.967856 | orchestrator |         }
2026-01-05 00:02:24.967868 | orchestrator |
2026-01-05 00:02:24.967872 | orchestrator |       + binding (known after apply)
2026-01-05 00:02:24.967876 | orchestrator |
2026-01-05 00:02:24.967879 | orchestrator |       + fixed_ip {
2026-01-05 00:02:24.967883 | orchestrator |           + ip_address = "192.168.16.12"
2026-01-05 00:02:24.967887 | orchestrator |           + subnet_id  = (known after apply)
2026-01-05 00:02:24.967891 | orchestrator |         }
2026-01-05 00:02:24.967894 | orchestrator |     }
2026-01-05 00:02:24.967901 | orchestrator |
2026-01-05 00:02:24.967905 | orchestrator |   # openstack_networking_port_v2.node_port_management[3] will be created
2026-01-05 00:02:24.967909 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-01-05 00:02:24.967913 | orchestrator |       + admin_state_up         = (known after apply)
2026-01-05 00:02:24.967916 | orchestrator |       + all_fixed_ips          = (known after apply)
2026-01-05 00:02:24.967920 | orchestrator |       + all_security_group_ids = (known after apply)
2026-01-05 00:02:24.967924 | orchestrator |       + all_tags               = (known after apply)
2026-01-05 00:02:24.967928 | orchestrator |       + device_id              = (known after apply)
2026-01-05 00:02:24.967931 | orchestrator |       + device_owner           = (known after apply)
2026-01-05 00:02:24.967935 | orchestrator |       + dns_assignment         = (known after apply)
2026-01-05 00:02:24.967939 | orchestrator |       + dns_name               = (known after apply)
2026-01-05 00:02:24.967943 | orchestrator |       + id                     = (known after apply)
2026-01-05 00:02:24.967946 | orchestrator |       + mac_address            = (known after apply)
2026-01-05 00:02:24.967950 | orchestrator |       + network_id             = (known after apply)
2026-01-05 00:02:24.967954 | orchestrator |       + port_security_enabled  = (known after apply)
2026-01-05 00:02:24.967958 | orchestrator |       + qos_policy_id          = (known after apply)
2026-01-05 00:02:24.967961 | orchestrator |       + region                 = (known after apply)
2026-01-05 00:02:24.967965 | orchestrator |       + security_group_ids     = (known after apply)
2026-01-05 00:02:24.967969 | orchestrator |       + tenant_id              = (known after apply)
2026-01-05 00:02:24.967973 | orchestrator |
2026-01-05 00:02:24.967976 | orchestrator |       + allowed_address_pairs {
2026-01-05 00:02:24.967980 | orchestrator |           + ip_address = "192.168.16.254/32"
2026-01-05 00:02:24.967984 | orchestrator |         }
2026-01-05 00:02:24.967988 | orchestrator |       + allowed_address_pairs {
2026-01-05 00:02:24.967991 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-01-05 00:02:24.967995 | orchestrator |         }
2026-01-05 00:02:24.967999 | orchestrator |       + allowed_address_pairs {
2026-01-05 00:02:24.968003 | orchestrator |           + ip_address = "192.168.16.9/32"
2026-01-05 00:02:24.968006 | orchestrator |         }
2026-01-05 00:02:24.968010 | orchestrator |
2026-01-05 00:02:24.968017 | orchestrator |       + binding (known after apply)
2026-01-05 00:02:24.968021 | orchestrator |
2026-01-05 00:02:24.968024 | orchestrator |       + fixed_ip {
2026-01-05 00:02:24.968028 | orchestrator |           + ip_address = "192.168.16.13"
2026-01-05 00:02:24.968032 | orchestrator |           + subnet_id  = (known after apply)
2026-01-05 00:02:24.968036 | orchestrator |         }
2026-01-05 00:02:24.968039 | orchestrator |     }
2026-01-05 00:02:24.968043 | orchestrator |
2026-01-05 00:02:24.968047 | orchestrator |   # openstack_networking_port_v2.node_port_management[4] will be created
2026-01-05 00:02:24.968051 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-01-05 00:02:24.968054 | orchestrator |       + admin_state_up         = (known after apply)
2026-01-05 00:02:24.968058 | orchestrator |       + all_fixed_ips          = (known after apply)
2026-01-05 00:02:24.968062 | orchestrator |       + all_security_group_ids = (known after apply)
2026-01-05 00:02:24.968066 | orchestrator |       + all_tags               = (known after apply)
2026-01-05 00:02:24.968069 | orchestrator |       + device_id              = (known after apply)
2026-01-05 00:02:24.968073 | orchestrator |       + device_owner           = (known after apply)
2026-01-05 00:02:24.968077 | orchestrator |       + dns_assignment         = (known after apply)
2026-01-05 00:02:24.968080 | orchestrator |       + dns_name               = (known after apply)
2026-01-05 00:02:24.968087 | orchestrator |       + id                     = (known after apply)
2026-01-05 00:02:24.968091 | orchestrator |       + mac_address            = (known after apply)
2026-01-05 00:02:24.968095 | orchestrator |       + network_id             = (known after apply)
2026-01-05 00:02:24.968098 | orchestrator |       + port_security_enabled  = (known after apply)
2026-01-05 00:02:24.968102 | orchestrator |       + qos_policy_id          = (known after apply)
2026-01-05 00:02:24.968106 | orchestrator |       + region                 = (known after apply)
2026-01-05 00:02:24.968110 | orchestrator |       + security_group_ids     = (known after apply)
2026-01-05 00:02:24.968113 | orchestrator |       + tenant_id              = (known after apply)
2026-01-05 00:02:24.968118 | orchestrator |
2026-01-05 00:02:24.968121 | orchestrator |       + allowed_address_pairs {
2026-01-05 00:02:24.968127 | orchestrator |           + ip_address = "192.168.16.254/32"
2026-01-05 00:02:24.968131 | orchestrator |         }
2026-01-05 00:02:24.968135 | orchestrator |       + allowed_address_pairs {
2026-01-05 00:02:24.968139 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-01-05 00:02:24.968142 | orchestrator |         }
2026-01-05 00:02:24.968146 | orchestrator |       + allowed_address_pairs {
2026-01-05 00:02:24.968150 | orchestrator |           + ip_address = "192.168.16.9/32"
2026-01-05 00:02:24.968154 | orchestrator |         }
2026-01-05 00:02:24.968157 | orchestrator |
2026-01-05 00:02:24.968161 | orchestrator |       + binding (known after apply)
2026-01-05 00:02:24.968165 | orchestrator |
2026-01-05 00:02:24.968169 | orchestrator |       + fixed_ip {
2026-01-05 00:02:24.968173 | orchestrator |           + ip_address = "192.168.16.14"
2026-01-05 00:02:24.968176 | orchestrator |           + subnet_id  = (known after apply)
2026-01-05 00:02:24.968180 | orchestrator |         }
2026-01-05 00:02:24.968184 | orchestrator |     }
2026-01-05 00:02:24.968192 | orchestrator |
2026-01-05 00:02:24.968195 | orchestrator |   # openstack_networking_port_v2.node_port_management[5] will be created
2026-01-05 00:02:24.968199 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-01-05 00:02:24.968203 | orchestrator |       + admin_state_up         = (known after apply)
2026-01-05 00:02:24.968207 | orchestrator |       + all_fixed_ips          = (known after apply)
2026-01-05 00:02:24.968210 | orchestrator |       + all_security_group_ids = (known after apply)
2026-01-05 00:02:24.968214 | orchestrator |       + all_tags               = (known after apply)
2026-01-05 00:02:24.968218 | orchestrator |       + device_id              = (known after apply)
2026-01-05 00:02:24.968222 | orchestrator |       + device_owner           = (known after apply)
2026-01-05 00:02:24.968225 | orchestrator |       + dns_assignment         = (known after apply)
2026-01-05 00:02:24.968229 | orchestrator |       + dns_name               = (known after apply)
2026-01-05 00:02:24.968233 | orchestrator |       + id                     = (known after apply)
2026-01-05 00:02:24.968237 | orchestrator |       + mac_address            = (known after apply)
2026-01-05 00:02:24.968240 | orchestrator |       + network_id             = (known after apply)
2026-01-05 00:02:24.968244 | orchestrator |       + port_security_enabled  = (known after apply)
2026-01-05 00:02:24.968248 | orchestrator |       + qos_policy_id          = (known after apply)
2026-01-05 00:02:24.968255 | orchestrator |       + region                 = (known after apply)
2026-01-05 00:02:24.968259 | orchestrator |       + security_group_ids     = (known after apply)
2026-01-05 00:02:24.968262 | orchestrator |       + tenant_id              = (known after apply)
2026-01-05 00:02:24.968266 | orchestrator |
2026-01-05 00:02:24.968270 | orchestrator |       + allowed_address_pairs {
2026-01-05 00:02:24.968274 | orchestrator |           + ip_address = "192.168.16.254/32"
2026-01-05 00:02:24.968277 | orchestrator |         }
2026-01-05 00:02:24.968281 | orchestrator |       + allowed_address_pairs {
2026-01-05 00:02:24.968285 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-01-05 00:02:24.968289 | orchestrator |         }
2026-01-05 00:02:24.968292 | orchestrator |       + allowed_address_pairs {
2026-01-05 00:02:24.968296 | orchestrator |           + ip_address = "192.168.16.9/32"
2026-01-05 00:02:24.968300 | orchestrator |         }
2026-01-05 00:02:24.968304 | orchestrator |
2026-01-05 00:02:24.968307 | orchestrator |       + binding (known after apply)
2026-01-05 00:02:24.968311 | orchestrator |
2026-01-05 00:02:24.968315 | orchestrator |       + fixed_ip {
2026-01-05 00:02:24.968319 | orchestrator |           + ip_address = "192.168.16.15"
2026-01-05 00:02:24.968322 | orchestrator |           + subnet_id  = (known after apply)
2026-01-05 00:02:24.968326 | orchestrator |         }
2026-01-05 00:02:24.968330 | orchestrator |     }
2026-01-05 00:02:24.968334 | orchestrator |
2026-01-05 00:02:24.968337 | orchestrator |   # openstack_networking_router_interface_v2.router_interface will be created
2026-01-05 00:02:24.968341 | orchestrator |   + resource "openstack_networking_router_interface_v2" "router_interface" {
2026-01-05 00:02:24.968345 | orchestrator |       + force_destroy = false
2026-01-05 00:02:24.968349 | orchestrator |       + id            = (known after apply)
2026-01-05 00:02:24.968352 | orchestrator |       + port_id       = (known after apply)
2026-01-05 00:02:24.968356 | orchestrator |       + region        = (known after apply)
2026-01-05 00:02:24.968360 | orchestrator |       + router_id     = (known after apply)
2026-01-05 00:02:24.968364 | orchestrator |       + subnet_id     = (known after apply)
2026-01-05 00:02:24.968367 | orchestrator |     }
2026-01-05 00:02:24.968371 | orchestrator |
2026-01-05 00:02:24.968375 | orchestrator |   # openstack_networking_router_v2.router will be created
2026-01-05 00:02:24.968379 | orchestrator |   + resource "openstack_networking_router_v2" "router" {
2026-01-05 00:02:24.968382 | orchestrator |       + admin_state_up          = (known after apply)
2026-01-05 00:02:24.968386 | orchestrator |       + all_tags                = (known after apply)
2026-01-05 00:02:24.968390 | orchestrator |       + availability_zone_hints = [
2026-01-05 00:02:24.968394 | orchestrator |           + "nova",
2026-01-05 00:02:24.968397 | orchestrator |         ]
2026-01-05 00:02:24.968401 | orchestrator |       + distributed             = (known after apply)
2026-01-05 00:02:24.968405 | orchestrator |       + enable_snat             = (known after apply)
2026-01-05 00:02:24.968409 | orchestrator |       + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
2026-01-05 00:02:24.968412 | orchestrator |       + external_qos_policy_id  = (known after apply)
2026-01-05 00:02:24.968416 | orchestrator |       + id                      = (known after apply)
2026-01-05 00:02:24.968420 | orchestrator |       + name                    = "testbed"
2026-01-05 00:02:24.968423 | orchestrator |       + region                  = (known after apply)
2026-01-05 00:02:24.968427 | orchestrator |       + tenant_id               = (known after apply)
2026-01-05 00:02:24.968431 | orchestrator |
2026-01-05 00:02:24.968435 | orchestrator |       + external_fixed_ip (known after apply)
2026-01-05 00:02:24.968438 | orchestrator |     }
2026-01-05 00:02:24.968442 | orchestrator |
2026-01-05 00:02:24.968446 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
2026-01-05 00:02:24.968451 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
2026-01-05 00:02:24.968454 | orchestrator |       + description             = "ssh"
2026-01-05 00:02:24.968458 | orchestrator |       + direction               = "ingress"
2026-01-05 00:02:24.968462 | orchestrator |       + ethertype               = "IPv4"
2026-01-05 00:02:24.968465 | orchestrator |       + id                      = (known after apply)
2026-01-05 00:02:24.968469 | orchestrator |       + port_range_max          = 22
2026-01-05 00:02:24.968473 | orchestrator |       + port_range_min          = 22
2026-01-05 00:02:24.968477 | orchestrator |       + protocol                = "tcp"
2026-01-05 00:02:24.968481 | orchestrator |       + region                  = (known after apply)
2026-01-05 00:02:24.968488 | orchestrator |       + remote_address_group_id = (known after apply)
2026-01-05 00:02:24.968492 | orchestrator |       + remote_group_id         = (known after apply)
2026-01-05 00:02:24.968496 | orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
2026-01-05 00:02:24.968500 | orchestrator |       + security_group_id       = (known after apply)
2026-01-05 00:02:24.968504 | orchestrator |       + tenant_id               = (known after apply)
2026-01-05 00:02:24.968507 | orchestrator |     }
2026-01-05 00:02:24.968511 | orchestrator |
2026-01-05 00:02:24.968519 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
2026-01-05 00:02:24.968523 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
2026-01-05 00:02:24.968527 | orchestrator |       + description             = "wireguard"
2026-01-05 00:02:24.968530 | orchestrator |       + direction               = "ingress"
2026-01-05 00:02:24.968534 | orchestrator |       + ethertype               = "IPv4"
2026-01-05 00:02:24.968538 | orchestrator |       + id                      = (known after apply)
2026-01-05 00:02:24.968541 | orchestrator |       + port_range_max          = 51820
2026-01-05 00:02:24.968545 | orchestrator |       + port_range_min          = 51820
2026-01-05 00:02:24.968549 | orchestrator |       + protocol                = "udp"
2026-01-05 00:02:24.968553 | orchestrator |       + region                  = (known after apply)
2026-01-05 00:02:24.968556 | orchestrator |       + remote_address_group_id = (known after apply)
2026-01-05 00:02:24.968560 | orchestrator |       + remote_group_id         = (known after apply)
2026-01-05 00:02:24.968564 | orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
2026-01-05 00:02:24.968568 | orchestrator |       + security_group_id       = (known after apply)
2026-01-05 00:02:24.968571 | orchestrator |       + tenant_id               = (known after apply)
2026-01-05 00:02:24.968575 | orchestrator |     }
2026-01-05 00:02:24.968579 | orchestrator |
2026-01-05 00:02:24.968583 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
2026-01-05 00:02:24.968587 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
2026-01-05 00:02:24.968593 | orchestrator |       + direction               = "ingress"
2026-01-05 00:02:24.968597 | orchestrator |       + ethertype               = "IPv4"
2026-01-05 00:02:24.968601 | orchestrator |       + id                      = (known after apply)
2026-01-05 00:02:24.968605 | orchestrator |       + protocol                = "tcp"
2026-01-05 00:02:24.968608 | orchestrator |       + region                  = (known after apply)
2026-01-05 00:02:24.968612 | orchestrator |       + remote_address_group_id = (known after apply)
2026-01-05 00:02:24.968616 | orchestrator |       + remote_group_id         = (known after apply)
2026-01-05 00:02:24.968620 | orchestrator |       + remote_ip_prefix        = "192.168.16.0/20"
2026-01-05 00:02:24.968623 | orchestrator |       + security_group_id       = (known after apply)
2026-01-05 00:02:24.968627 | orchestrator |       + tenant_id               = (known after apply)
2026-01-05 00:02:24.968631 | orchestrator |     }
2026-01-05 00:02:24.968635 | orchestrator |
2026-01-05 00:02:24.968638 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
2026-01-05 00:02:24.968642 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
2026-01-05 00:02:24.968646 | orchestrator |       + direction               = "ingress"
2026-01-05 00:02:24.968650 | orchestrator |       + ethertype               = "IPv4"
2026-01-05 00:02:24.968653 | orchestrator |       + id                      = (known after apply)
2026-01-05 00:02:24.968657 | orchestrator |       + protocol                = "udp"
2026-01-05 00:02:24.968661 | orchestrator |       + region                  = (known after apply)
2026-01-05 00:02:24.968665 | orchestrator |       + remote_address_group_id = (known after apply)
2026-01-05 00:02:24.968668 | orchestrator |       + remote_group_id         = (known after apply)
2026-01-05 00:02:24.968672 | orchestrator |       + remote_ip_prefix        = "192.168.16.0/20"
2026-01-05 00:02:24.968676 | orchestrator |       + security_group_id       = (known after apply)
2026-01-05 00:02:24.968680 | orchestrator |       + tenant_id               = (known after apply)
2026-01-05 00:02:24.968683 | orchestrator |     }
2026-01-05 00:02:24.968687 | orchestrator |
2026-01-05 00:02:24.968691 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
2026-01-05 00:02:24.968698 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
2026-01-05 00:02:24.968701 | orchestrator |       + direction               = "ingress"
2026-01-05 00:02:24.968705 | orchestrator |       + ethertype               = "IPv4"
2026-01-05 00:02:24.968709 | orchestrator |       + id                      = (known after apply)
2026-01-05 00:02:24.968713 | orchestrator |       + protocol                = "icmp"
2026-01-05 00:02:24.968716 | orchestrator |       + region                  = (known after apply)
2026-01-05 00:02:24.968720 | orchestrator |       + remote_address_group_id = (known after apply)
2026-01-05 00:02:24.968724 | orchestrator |       + remote_group_id         = (known after apply)
2026-01-05 00:02:24.968728 | orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
2026-01-05 00:02:24.968731 | orchestrator |       + security_group_id       = (known after apply)
2026-01-05 00:02:24.968735 | orchestrator |       + tenant_id               = (known after apply)
2026-01-05 00:02:24.968739 | orchestrator |     }
2026-01-05 00:02:24.968743 | orchestrator |
2026-01-05 00:02:24.968746 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2026-01-05 00:02:24.968750 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2026-01-05 00:02:24.968754 | orchestrator |       + direction               = "ingress"
2026-01-05 00:02:24.968758 | orchestrator |       + ethertype               = "IPv4"
2026-01-05 00:02:24.968761 | orchestrator |       + id                      = (known after apply)
2026-01-05 00:02:24.968765 | orchestrator |       + protocol                = "tcp"
2026-01-05 00:02:24.968769 | orchestrator |       + region                  = (known after apply)
2026-01-05 00:02:24.968773 | orchestrator |       + remote_address_group_id = (known after apply)
2026-01-05 00:02:24.968776 | orchestrator |       + remote_group_id         = (known after apply)
2026-01-05 00:02:24.968780 | orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
2026-01-05 00:02:24.968784 | orchestrator |       + security_group_id       = (known after apply)
2026-01-05 00:02:24.968787 | orchestrator |       + tenant_id               = (known after apply)
2026-01-05 00:02:24.968791 | orchestrator |     }
2026-01-05 00:02:24.968795 | orchestrator |
2026-01-05 00:02:24.968799 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2026-01-05 00:02:24.968802 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2026-01-05 00:02:24.968806 | orchestrator |       + direction               = "ingress"
2026-01-05 00:02:24.968810 | orchestrator |       + ethertype               = "IPv4"
2026-01-05 00:02:24.968814 | orchestrator |       + id                      = (known after apply)
2026-01-05 00:02:24.968817 | orchestrator |       + protocol                = "udp"
2026-01-05 00:02:24.968821 | orchestrator |       + region                  = (known after apply)
2026-01-05 00:02:24.968825 | orchestrator |       + remote_address_group_id = (known after apply)
2026-01-05 00:02:24.968829 | orchestrator |       + remote_group_id         = (known after apply)
2026-01-05 00:02:24.968832 | orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
2026-01-05 00:02:24.968843 | orchestrator |       + security_group_id       = (known after apply)
2026-01-05 00:02:24.968847 | orchestrator |       + tenant_id               = (known after apply)
2026-01-05 00:02:24.968851 | orchestrator |     }
2026-01-05 00:02:24.968855 | orchestrator |
2026-01-05 00:02:24.968884 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2026-01-05 00:02:24.968888 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2026-01-05 00:02:24.968892 | orchestrator |       + direction               = "ingress"
2026-01-05 00:02:24.968895 | orchestrator |       + ethertype               = "IPv4"
2026-01-05 00:02:24.968899 | orchestrator |       + id                      = (known after apply)
2026-01-05 00:02:24.968903 | orchestrator |       + protocol                = "icmp"
2026-01-05 00:02:24.968907 | orchestrator |       + region                  = (known after apply)
2026-01-05 00:02:24.968911 | orchestrator |       + remote_address_group_id = (known after apply)
2026-01-05 00:02:24.968914 | orchestrator |       + remote_group_id         = (known after apply)
2026-01-05 00:02:24.968918 | orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
2026-01-05 00:02:24.968922 | orchestrator |       + security_group_id       = (known after apply)
2026-01-05 00:02:24.968926 | orchestrator |       + tenant_id               = (known after apply)
2026-01-05 00:02:24.968932 | orchestrator |     }
2026-01-05 00:02:24.968936 | orchestrator |
2026-01-05 00:02:24.968940 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2026-01-05 00:02:24.968944 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2026-01-05 00:02:24.968948 | orchestrator |       + description             = "vrrp"
2026-01-05 00:02:24.968951 | orchestrator |       + direction               = "ingress"
2026-01-05 00:02:24.968955 | orchestrator |       + ethertype               = "IPv4"
2026-01-05 00:02:24.968959 | orchestrator |       + id                      = (known after apply)
2026-01-05 00:02:24.968963 | orchestrator |       + protocol                = "112"
2026-01-05 00:02:24.968966 | orchestrator |       + region                  = (known after apply)
2026-01-05 00:02:24.968970 | orchestrator |       + remote_address_group_id = (known after apply)
2026-01-05 00:02:24.968974 | orchestrator |       + remote_group_id         = (known after apply)
2026-01-05 00:02:24.968978 | orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
2026-01-05 00:02:24.968982 | orchestrator |       + security_group_id       = (known after apply)
2026-01-05 00:02:24.968985 | orchestrator |       + tenant_id               = (known after apply)
2026-01-05 00:02:24.968989 | orchestrator |     }
2026-01-05 00:02:24.968993 | orchestrator |
2026-01-05 00:02:24.968997 | orchestrator |   # openstack_networking_secgroup_v2.security_group_management will be created
2026-01-05 00:02:24.969001 | orchestrator |   + resource "openstack_networking_secgroup_v2" "security_group_management" {
2026-01-05 00:02:24.969004 | orchestrator |       + all_tags    = (known after apply)
2026-01-05 00:02:24.969008 | orchestrator |       + description = "management security group"
2026-01-05 00:02:24.969012 | orchestrator |       + id          = (known after apply)
2026-01-05 00:02:24.969016 | orchestrator |       + name        = "testbed-management"
2026-01-05 00:02:24.969019 | orchestrator |       + region      = (known after apply)
2026-01-05 00:02:24.969023 | orchestrator |       + stateful    = (known after apply)
2026-01-05 00:02:24.969027 | orchestrator |       + tenant_id   = (known after apply)
2026-01-05 00:02:24.969031 | orchestrator |     }
2026-01-05 00:02:24.969034 | orchestrator |
2026-01-05 00:02:24.969038 | orchestrator |   # openstack_networking_secgroup_v2.security_group_node will be created
2026-01-05 00:02:24.969042 | orchestrator |   + resource "openstack_networking_secgroup_v2" "security_group_node" {
2026-01-05 00:02:24.969046 | orchestrator |       + all_tags    = (known after apply)
2026-01-05 00:02:24.969049 | orchestrator |       + description = "node security group"
2026-01-05 00:02:24.969053 | orchestrator |       + id          = (known after apply)
2026-01-05 00:02:24.969057 | orchestrator |       + name        = "testbed-node"
2026-01-05 00:02:24.969061 | orchestrator |       + region      = (known after apply)
2026-01-05 00:02:24.969064 | orchestrator |       + stateful    = (known after apply)
2026-01-05 00:02:24.969068 | orchestrator |       + tenant_id   = (known after apply)
2026-01-05 00:02:24.969072 | orchestrator |     }
2026-01-05 00:02:24.969076 | orchestrator |
2026-01-05 00:02:24.969079 | orchestrator |   # openstack_networking_subnet_v2.subnet_management will be created
2026-01-05 00:02:24.969083 | orchestrator |   + resource "openstack_networking_subnet_v2" "subnet_management" {
2026-01-05 00:02:24.969087 | orchestrator |       + all_tags          = (known after apply)
2026-01-05 00:02:24.969091 | orchestrator |       + cidr              = "192.168.16.0/20"
2026-01-05 00:02:24.969094 | orchestrator |       + dns_nameservers   = [
2026-01-05 00:02:24.969098 | orchestrator |           + "8.8.8.8",
2026-01-05 00:02:24.969102 | orchestrator |           + "9.9.9.9",
2026-01-05 00:02:24.969106 | orchestrator |         ]
2026-01-05 00:02:24.969110 | orchestrator |       + enable_dhcp       = true
2026-01-05 00:02:24.969114 | orchestrator |       + gateway_ip        = (known after apply)
2026-01-05 00:02:24.969120 | orchestrator |       + id                = (known after apply)
2026-01-05 00:02:24.969124 | orchestrator |       + ip_version        = 4
2026-01-05 00:02:24.969128 | orchestrator |       + ipv6_address_mode = (known after apply)
2026-01-05 00:02:24.969131 | orchestrator |       + ipv6_ra_mode      = (known after apply)
2026-01-05 00:02:24.969135 | orchestrator |       + name              = "subnet-testbed-management"
2026-01-05 00:02:24.969139 | orchestrator | + network_id = (known after apply) 2026-01-05 00:02:24.969143 | orchestrator | + no_gateway = false 2026-01-05 00:02:24.969147 | orchestrator | + region = (known after apply) 2026-01-05 00:02:24.969150 | orchestrator | + service_types = (known after apply) 2026-01-05 00:02:24.969157 | orchestrator | + tenant_id = (known after apply) 2026-01-05 00:02:24.969161 | orchestrator | 2026-01-05 00:02:24.969165 | orchestrator | + allocation_pool { 2026-01-05 00:02:24.969168 | orchestrator | + end = "192.168.31.250" 2026-01-05 00:02:24.969172 | orchestrator | + start = "192.168.31.200" 2026-01-05 00:02:24.969176 | orchestrator | } 2026-01-05 00:02:24.969180 | orchestrator | } 2026-01-05 00:02:24.969184 | orchestrator | 2026-01-05 00:02:24.969187 | orchestrator | # terraform_data.image will be created 2026-01-05 00:02:24.969191 | orchestrator | + resource "terraform_data" "image" { 2026-01-05 00:02:24.969195 | orchestrator | + id = (known after apply) 2026-01-05 00:02:24.969199 | orchestrator | + input = "Ubuntu 24.04" 2026-01-05 00:02:24.969202 | orchestrator | + output = (known after apply) 2026-01-05 00:02:24.969206 | orchestrator | } 2026-01-05 00:02:24.969210 | orchestrator | 2026-01-05 00:02:24.969214 | orchestrator | # terraform_data.image_node will be created 2026-01-05 00:02:24.969217 | orchestrator | + resource "terraform_data" "image_node" { 2026-01-05 00:02:24.969221 | orchestrator | + id = (known after apply) 2026-01-05 00:02:24.969225 | orchestrator | + input = "Ubuntu 24.04" 2026-01-05 00:02:24.969229 | orchestrator | + output = (known after apply) 2026-01-05 00:02:24.969232 | orchestrator | } 2026-01-05 00:02:24.969236 | orchestrator | 2026-01-05 00:02:24.969240 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
2026-01-05 00:02:24.969244 | orchestrator | 2026-01-05 00:02:24.969247 | orchestrator | Changes to Outputs: 2026-01-05 00:02:24.969251 | orchestrator | + manager_address = (sensitive value) 2026-01-05 00:02:24.969255 | orchestrator | + private_key = (sensitive value) 2026-01-05 00:02:25.066114 | orchestrator | terraform_data.image_node: Creating... 2026-01-05 00:02:25.066576 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=93d3be98-003a-090b-eb99-f942bf48e81a] 2026-01-05 00:02:25.193819 | orchestrator | terraform_data.image: Creating... 2026-01-05 00:02:25.195526 | orchestrator | terraform_data.image: Creation complete after 0s [id=783c8349-0ad7-193a-203b-4d060f0caf90] 2026-01-05 00:02:25.221465 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-01-05 00:02:25.231811 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-01-05 00:02:25.233930 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-01-05 00:02:25.237946 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-01-05 00:02:25.238043 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-01-05 00:02:25.238162 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-01-05 00:02:25.239218 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-01-05 00:02:25.272084 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-01-05 00:02:25.272153 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-01-05 00:02:25.273102 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-01-05 00:02:25.682524 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-01-05 00:02:25.687233 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 
2026-01-05 00:02:25.780702 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2026-01-05 00:02:25.790093 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-01-05 00:02:26.290516 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=6b2583b3-d436-4506-baa2-fb110c26a66a] 2026-01-05 00:02:26.294638 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-01-05 00:02:26.343150 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-01-05 00:02:26.360355 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-01-05 00:02:28.857494 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=79f451b0-665e-4ae6-bc28-e4c9d18e1f8d] 2026-01-05 00:02:28.873895 | orchestrator | local_file.id_rsa_pub: Creating... 2026-01-05 00:02:28.878273 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=89c52e93947689fe686410ff3b473956bc7e9f5c] 2026-01-05 00:02:28.886798 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=faa0d012-340f-4cbd-a064-876345a11d6a] 2026-01-05 00:02:28.890219 | orchestrator | local_sensitive_file.id_rsa: Creating... 2026-01-05 00:02:28.895146 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 2026-01-05 00:02:28.895189 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=309578a2156787a5b091eb4e638b95f93a5bb1f2] 2026-01-05 00:02:28.909755 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 
2026-01-05 00:02:28.918641 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=423e4112-2158-480f-994d-106730fe425c] 2026-01-05 00:02:28.927104 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=165d58d7-2860-4843-bbd3-8318e20b6051] 2026-01-05 00:02:28.933460 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2026-01-05 00:02:28.935818 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2026-01-05 00:02:28.944333 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=23055056-069f-450b-aeeb-5eb50c3216da] 2026-01-05 00:02:28.963794 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=177f10be-5bcc-4fc5-a906-9c9dfc4c0725] 2026-01-05 00:02:28.978812 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=bd2b6514-9bcf-45c0-8865-be606d512acf] 2026-01-05 00:02:28.986156 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2026-01-05 00:02:28.992709 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2026-01-05 00:02:29.004852 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 
2026-01-05 00:02:29.006950 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=a447ecf7-81d3-4a74-8944-683d4141cf1b] 2026-01-05 00:02:29.019230 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=40600621-aef8-490d-8855-2a618a83589e] 2026-01-05 00:02:29.717097 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=3e861e97-55e8-471e-a69f-5dd2ebc3cf79] 2026-01-05 00:02:29.862881 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=b1cbe895-8099-4ffc-9627-216c01b47c28] 2026-01-05 00:02:29.867911 | orchestrator | openstack_networking_router_v2.router: Creating... 2026-01-05 00:02:32.339581 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=d9814992-acb0-4fb6-b869-372bf4d7de3f] 2026-01-05 00:02:32.351671 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=9600cb02-fd9e-4a41-92d8-08e734250305] 2026-01-05 00:02:35.996524 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=34fdbb66-233c-4628-9399-a3b3dd90abc2] 2026-01-05 00:02:35.996609 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=9af08ba0-0250-48f3-ad13-298a6ecbf4d6] 2026-01-05 00:02:35.996625 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=5b0a8530-6c77-4769-a703-fe762948c9fb] 2026-01-05 00:02:35.996637 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=f65865d2-fa4a-4078-a136-ae0091ff8f64] 2026-01-05 00:02:35.996651 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=a33bd4b9-4e09-464a-add8-b5dda68b375e] 2026-01-05 00:02:35.996664 | orchestrator | 
openstack_networking_secgroup_v2.security_group_node: Creating... 2026-01-05 00:02:35.996676 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating... 2026-01-05 00:02:35.996687 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating... 2026-01-05 00:02:35.996699 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=2c534255-ef8b-4b42-9014-54a513de4304] 2026-01-05 00:02:35.996711 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2026-01-05 00:02:35.996722 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2026-01-05 00:02:35.996733 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2026-01-05 00:02:35.996775 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2026-01-05 00:02:35.996787 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2026-01-05 00:02:35.996798 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 2026-01-05 00:02:35.996810 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=2b5d2b1d-1f1a-4bfe-8d84-4f468871ab90] 2026-01-05 00:02:35.996836 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=0b95fc07-5a37-4328-bfc5-c5a5460a828c] 2026-01-05 00:02:35.996888 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=a3175354-9aab-40e2-aaa9-71daabf9c15f] 2026-01-05 00:02:35.996903 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2026-01-05 00:02:35.996914 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 
2026-01-05 00:02:35.996925 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2026-01-05 00:02:35.996936 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2026-01-05 00:02:35.996947 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 2026-01-05 00:02:35.996961 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=a4085940-ce44-4b6e-93ce-52b2caa58ff1] 2026-01-05 00:02:35.996972 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 2026-01-05 00:02:35.996983 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=8208e780-366f-4a29-a65c-286317cf0598] 2026-01-05 00:02:35.996994 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 2026-01-05 00:02:35.997006 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 0s [id=382bff08-51c5-4e33-85de-61f10bf90b4f] 2026-01-05 00:02:35.997016 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 2026-01-05 00:02:35.997027 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 0s [id=7eff5e2e-cd88-4040-96c5-b70190bbb006] 2026-01-05 00:02:35.997038 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 2026-01-05 00:02:35.997049 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 2s [id=d439d5d6-751a-4fed-9025-177da713141a] 2026-01-05 00:02:35.997060 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 
2026-01-05 00:02:35.997071 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=65f2cb5b-55a7-41c7-bb73-3cd28cae7c1a] 2026-01-05 00:02:35.997082 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=569af681-f34d-43ea-8e76-83469f34ad02] 2026-01-05 00:02:35.997093 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 2s [id=0c20f93f-28f0-4268-a3dd-1523bc74216f] 2026-01-05 00:02:35.997103 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=c8a60872-d094-445b-8ff6-8aad83335914] 2026-01-05 00:02:35.997114 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 2s [id=be6a732d-4084-430a-b997-cdc391f9361d] 2026-01-05 00:02:35.997125 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=d364b1e1-e906-4dd8-83b9-7f9597dbcefc] 2026-01-05 00:02:35.997136 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=5b88013c-8fcd-48a3-bf02-6f20a5d15abf] 2026-01-05 00:02:35.997160 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 0s [id=9faaf55f-ec2e-4af1-8545-13e57edf514a] 2026-01-05 00:02:35.997172 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 2s [id=54b5e37f-2729-446d-a2e8-d8f8dfb2a222] 2026-01-05 00:02:38.287218 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 5s [id=cca5d7e6-b465-4cdb-bb48-e4413dfb35a6] 2026-01-05 00:02:39.115988 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2026-01-05 00:02:39.116093 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 
2026-01-05 00:02:39.116106 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 2026-01-05 00:02:39.116116 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 2026-01-05 00:02:39.116125 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 2026-01-05 00:02:39.116134 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 2026-01-05 00:02:39.116166 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 2026-01-05 00:02:40.164012 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=564276cf-65f9-41cc-bfb6-834bbb013889] 2026-01-05 00:02:40.178113 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-01-05 00:02:40.181801 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-01-05 00:02:40.183245 | orchestrator | local_file.inventory: Creating... 2026-01-05 00:02:40.186550 | orchestrator | local_file.inventory: Creation complete after 0s [id=6f219868f8d211084369a73e9285ad77764790f6] 2026-01-05 00:02:40.186631 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=ff37b475084760d4e5b7f62427afbcefcde9910b] 2026-01-05 00:02:41.076491 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=564276cf-65f9-41cc-bfb6-834bbb013889] 2026-01-05 00:02:48.333743 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2026-01-05 00:02:48.333913 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2026-01-05 00:02:48.333932 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2026-01-05 00:02:48.343092 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... 
[10s elapsed] 2026-01-05 00:02:48.350592 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2026-01-05 00:02:48.350677 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2026-01-05 00:02:58.342157 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-01-05 00:02:58.342282 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-01-05 00:02:58.342295 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2026-01-05 00:02:58.343150 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2026-01-05 00:02:58.351580 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-01-05 00:02:58.351631 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-01-05 00:02:59.174107 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=42aff70d-b938-4b40-b674-ccd541e04707] 2026-01-05 00:03:08.342599 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2026-01-05 00:03:08.342749 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2026-01-05 00:03:08.343961 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2026-01-05 00:03:08.352382 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2026-01-05 00:03:08.352505 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... 
[30s elapsed] 2026-01-05 00:03:09.108319 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=27c9a8cb-278f-40e0-b587-fcb3d6a8f7a4] 2026-01-05 00:03:09.206320 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=fabaebe5-0af7-4415-8fe9-fd617ce5444a] 2026-01-05 00:03:09.349474 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=4b5b470c-aaa9-4f27-b64a-6db9f3600649] 2026-01-05 00:03:09.563163 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 32s [id=92945625-a046-442e-a276-b4f1838fe5d9] 2026-01-05 00:03:09.755680 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 32s [id=5544ba5c-073b-4340-9a85-c672cc019498] 2026-01-05 00:03:09.773620 | orchestrator | null_resource.node_semaphore: Creating... 2026-01-05 00:03:09.782909 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=8334474375452220088] 2026-01-05 00:03:09.790535 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2026-01-05 00:03:09.795796 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2026-01-05 00:03:09.807954 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2026-01-05 00:03:09.811943 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2026-01-05 00:03:09.812442 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2026-01-05 00:03:09.816088 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 2026-01-05 00:03:09.818541 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2026-01-05 00:03:09.820612 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 
2026-01-05 00:03:09.821304 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2026-01-05 00:03:09.843880 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-01-05 00:03:13.278921 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=42aff70d-b938-4b40-b674-ccd541e04707/177f10be-5bcc-4fc5-a906-9c9dfc4c0725] 2026-01-05 00:03:13.292386 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=27c9a8cb-278f-40e0-b587-fcb3d6a8f7a4/a447ecf7-81d3-4a74-8944-683d4141cf1b] 2026-01-05 00:03:13.307634 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=92945625-a046-442e-a276-b4f1838fe5d9/165d58d7-2860-4843-bbd3-8318e20b6051] 2026-01-05 00:03:13.336036 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=42aff70d-b938-4b40-b674-ccd541e04707/423e4112-2158-480f-994d-106730fe425c] 2026-01-05 00:03:13.374754 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=92945625-a046-442e-a276-b4f1838fe5d9/79f451b0-665e-4ae6-bc28-e4c9d18e1f8d] 2026-01-05 00:03:13.392261 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=27c9a8cb-278f-40e0-b587-fcb3d6a8f7a4/bd2b6514-9bcf-45c0-8865-be606d512acf] 2026-01-05 00:03:19.553309 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=27c9a8cb-278f-40e0-b587-fcb3d6a8f7a4/23055056-069f-450b-aeeb-5eb50c3216da] 2026-01-05 00:03:19.553692 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=92945625-a046-442e-a276-b4f1838fe5d9/faa0d012-340f-4cbd-a064-876345a11d6a] 2026-01-05 00:03:19.582249 | orchestrator | 
openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=42aff70d-b938-4b40-b674-ccd541e04707/40600621-aef8-490d-8855-2a618a83589e] 2026-01-05 00:03:19.817968 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2026-01-05 00:03:29.827221 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-01-05 00:03:30.310405 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=edd78d58-bfea-42d2-bf94-104fa18a6539] 2026-01-05 00:03:30.328356 | orchestrator | 2026-01-05 00:03:30.328460 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2026-01-05 00:03:30.328468 | orchestrator | 2026-01-05 00:03:30.328472 | orchestrator | Outputs: 2026-01-05 00:03:30.328478 | orchestrator | 2026-01-05 00:03:30.328503 | orchestrator | manager_address = 2026-01-05 00:03:30.328509 | orchestrator | private_key = 2026-01-05 00:03:30.661038 | orchestrator | ok: Runtime: 0:01:13.185396 2026-01-05 00:03:30.693463 | 2026-01-05 00:03:30.693615 | TASK [Create infrastructure (stable)] 2026-01-05 00:03:31.228115 | orchestrator | skipping: Conditional result was False 2026-01-05 00:03:31.245844 | 2026-01-05 00:03:31.246009 | TASK [Fetch manager address] 2026-01-05 00:03:31.759588 | orchestrator | ok 2026-01-05 00:03:31.768746 | 2026-01-05 00:03:31.768893 | TASK [Set manager_host address] 2026-01-05 00:03:31.843546 | orchestrator | ok 2026-01-05 00:03:31.851094 | 2026-01-05 00:03:31.851223 | LOOP [Update ansible collections] 2026-01-05 00:03:33.443899 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-05 00:03:33.444273 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-01-05 00:03:33.444330 | orchestrator | Starting galaxy collection install process 2026-01-05 00:03:33.444366 | orchestrator | Process install dependency map 
2026-01-05 00:03:33.444448 | orchestrator | Starting collection install process 2026-01-05 00:03:33.444478 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons' 2026-01-05 00:03:33.444513 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons 2026-01-05 00:03:33.444558 | orchestrator | osism.commons:999.0.0 was installed successfully 2026-01-05 00:03:33.444631 | orchestrator | ok: Item: commons Runtime: 0:00:01.168092 2026-01-05 00:03:34.802174 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-05 00:03:34.802348 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-01-05 00:03:34.802423 | orchestrator | Starting galaxy collection install process 2026-01-05 00:03:34.802463 | orchestrator | Process install dependency map 2026-01-05 00:03:34.802578 | orchestrator | Starting collection install process 2026-01-05 00:03:34.802620 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services' 2026-01-05 00:03:34.802654 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services 2026-01-05 00:03:34.802687 | orchestrator | osism.services:999.0.0 was installed successfully 2026-01-05 00:03:34.802742 | orchestrator | ok: Item: services Runtime: 0:00:01.054366 2026-01-05 00:03:34.822495 | 2026-01-05 00:03:34.822670 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-01-05 00:03:45.472450 | orchestrator | ok 2026-01-05 00:03:45.487994 | 2026-01-05 00:03:45.488135 | TASK [Wait a little longer for the manager so that everything is ready] 2026-01-05 00:04:45.542788 | orchestrator | ok 2026-01-05 00:04:45.554436 | 2026-01-05 00:04:45.554651 
| TASK [Fetch manager ssh hostkey] 2026-01-05 00:04:47.141132 | orchestrator | Output suppressed because no_log was given 2026-01-05 00:04:47.148730 | 2026-01-05 00:04:47.148861 | TASK [Get ssh keypair from terraform environment] 2026-01-05 00:04:47.685892 | orchestrator | ok: Runtime: 0:00:00.009198 2026-01-05 00:04:47.699974 | 2026-01-05 00:04:47.700135 | TASK [Point out that the following task takes some time and does not give any output] 2026-01-05 00:04:47.739416 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-01-05 00:04:47.751207 | 2026-01-05 00:04:47.751406 | TASK [Run manager part 0] 2026-01-05 00:04:48.660990 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-05 00:04:48.711357 | orchestrator | 2026-01-05 00:04:48.711396 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-01-05 00:04:48.711403 | orchestrator | 2026-01-05 00:04:48.711415 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-01-05 00:04:50.847069 | orchestrator | ok: [testbed-manager] 2026-01-05 00:04:50.847179 | orchestrator | 2026-01-05 00:04:50.847241 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-01-05 00:04:50.847268 | orchestrator | 2026-01-05 00:04:50.847304 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-05 00:04:52.769474 | orchestrator | ok: [testbed-manager] 2026-01-05 00:04:52.769532 | orchestrator | 2026-01-05 00:04:52.769540 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-01-05 00:04:53.446284 | orchestrator | ok: [testbed-manager] 2026-01-05 00:04:53.446375 | orchestrator | 2026-01-05 00:04:53.446384 | orchestrator | TASK 
[Set repo_path fact] ****************************************************** 2026-01-05 00:04:53.498283 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:04:53.498394 | orchestrator | 2026-01-05 00:04:53.498415 | orchestrator | TASK [Update package cache] **************************************************** 2026-01-05 00:04:53.532931 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:04:53.533070 | orchestrator | 2026-01-05 00:04:53.533094 | orchestrator | TASK [Install required packages] *********************************************** 2026-01-05 00:04:53.565489 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:04:53.565570 | orchestrator | 2026-01-05 00:04:53.565577 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-01-05 00:04:53.600426 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:04:53.600501 | orchestrator | 2026-01-05 00:04:53.600512 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-01-05 00:04:53.632478 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:04:53.632539 | orchestrator | 2026-01-05 00:04:53.632547 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-01-05 00:04:53.665369 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:04:53.665444 | orchestrator | 2026-01-05 00:04:53.665459 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-01-05 00:04:53.694136 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:04:53.694195 | orchestrator | 2026-01-05 00:04:53.694204 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-01-05 00:04:54.504223 | orchestrator | changed: [testbed-manager] 2026-01-05 00:04:54.504291 | orchestrator | 2026-01-05 00:04:54.504301 | orchestrator | TASK [Update APT cache and run dist-upgrade] 
*********************************** 2026-01-05 00:07:48.461509 | orchestrator | changed: [testbed-manager] 2026-01-05 00:07:48.461609 | orchestrator | 2026-01-05 00:07:48.461656 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-01-05 00:09:18.335119 | orchestrator | changed: [testbed-manager] 2026-01-05 00:09:18.335209 | orchestrator | 2026-01-05 00:09:18.335222 | orchestrator | TASK [Install required packages] *********************************************** 2026-01-05 00:09:43.947642 | orchestrator | changed: [testbed-manager] 2026-01-05 00:09:43.947753 | orchestrator | 2026-01-05 00:09:43.947771 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-01-05 00:09:54.002784 | orchestrator | changed: [testbed-manager] 2026-01-05 00:09:54.002893 | orchestrator | 2026-01-05 00:09:54.002912 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-01-05 00:09:54.052174 | orchestrator | ok: [testbed-manager] 2026-01-05 00:09:54.052262 | orchestrator | 2026-01-05 00:09:54.052278 | orchestrator | TASK [Get current user] ******************************************************** 2026-01-05 00:09:54.920592 | orchestrator | ok: [testbed-manager] 2026-01-05 00:09:54.920834 | orchestrator | 2026-01-05 00:09:54.920854 | orchestrator | TASK [Create venv directory] *************************************************** 2026-01-05 00:09:55.697291 | orchestrator | changed: [testbed-manager] 2026-01-05 00:09:55.698344 | orchestrator | 2026-01-05 00:09:55.698376 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-01-05 00:10:02.273807 | orchestrator | changed: [testbed-manager] 2026-01-05 00:10:02.273902 | orchestrator | 2026-01-05 00:10:02.273939 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-01-05 00:10:08.535081 | orchestrator | changed: 
[testbed-manager] 2026-01-05 00:10:08.535179 | orchestrator | 2026-01-05 00:10:08.535198 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-01-05 00:10:11.288516 | orchestrator | changed: [testbed-manager] 2026-01-05 00:10:11.288600 | orchestrator | 2026-01-05 00:10:11.288616 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-01-05 00:10:14.258670 | orchestrator | changed: [testbed-manager] 2026-01-05 00:10:14.258724 | orchestrator | 2026-01-05 00:10:14.258734 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-01-05 00:10:15.408934 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-05 00:10:15.408975 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-05 00:10:15.408984 | orchestrator | 2026-01-05 00:10:15.408992 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-01-05 00:10:15.456412 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-05 00:10:15.456718 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-05 00:10:15.456754 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-05 00:10:15.456774 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-01-05 00:10:18.819574 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-05 00:10:18.819621 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-05 00:10:18.819626 | orchestrator | 2026-01-05 00:10:18.819631 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-01-05 00:10:19.364751 | orchestrator | changed: [testbed-manager] 2026-01-05 00:10:19.364795 | orchestrator | 2026-01-05 00:10:19.364801 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-01-05 00:14:39.906893 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-01-05 00:14:39.906945 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-01-05 00:14:39.906954 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-01-05 00:14:39.906959 | orchestrator | 2026-01-05 00:14:39.906964 | orchestrator | TASK [Install local collections] *********************************************** 2026-01-05 00:14:42.295015 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-01-05 00:14:42.295121 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-01-05 00:14:42.295137 | orchestrator | 2026-01-05 00:14:42.295151 | orchestrator | PLAY [Create operator user] **************************************************** 2026-01-05 00:14:42.295163 | orchestrator | 2026-01-05 00:14:42.295175 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-05 00:14:43.746343 | orchestrator | ok: [testbed-manager] 2026-01-05 00:14:43.746430 | orchestrator | 2026-01-05 00:14:43.746442 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-05 00:14:43.793055 | orchestrator | ok: [testbed-manager] 2026-01-05 00:14:43.793157 | 
orchestrator | 2026-01-05 00:14:43.793173 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-05 00:14:43.864469 | orchestrator | ok: [testbed-manager] 2026-01-05 00:14:43.864563 | orchestrator | 2026-01-05 00:14:43.864578 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-01-05 00:14:44.733657 | orchestrator | changed: [testbed-manager] 2026-01-05 00:14:44.733762 | orchestrator | 2026-01-05 00:14:44.733800 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-01-05 00:14:45.530542 | orchestrator | changed: [testbed-manager] 2026-01-05 00:14:45.530610 | orchestrator | 2026-01-05 00:14:45.530624 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-01-05 00:14:47.090503 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-01-05 00:14:47.090634 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-01-05 00:14:47.090659 | orchestrator | 2026-01-05 00:14:47.090703 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-01-05 00:14:48.593333 | orchestrator | changed: [testbed-manager] 2026-01-05 00:14:48.593399 | orchestrator | 2026-01-05 00:14:48.593407 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-01-05 00:14:50.510760 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-01-05 00:14:50.510982 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-01-05 00:14:50.511000 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-01-05 00:14:50.511012 | orchestrator | 2026-01-05 00:14:50.511025 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-01-05 00:14:50.584043 | orchestrator | skipping: 
[testbed-manager] 2026-01-05 00:14:50.584137 | orchestrator | 2026-01-05 00:14:50.584157 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-01-05 00:14:50.669247 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:14:50.669299 | orchestrator | 2026-01-05 00:14:50.669309 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-01-05 00:14:51.255236 | orchestrator | changed: [testbed-manager] 2026-01-05 00:14:51.255277 | orchestrator | 2026-01-05 00:14:51.255283 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-01-05 00:14:51.326062 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:14:51.326114 | orchestrator | 2026-01-05 00:14:51.326124 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-01-05 00:14:52.232727 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-05 00:14:52.232853 | orchestrator | changed: [testbed-manager] 2026-01-05 00:14:52.232871 | orchestrator | 2026-01-05 00:14:52.232884 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-01-05 00:14:52.269469 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:14:52.269531 | orchestrator | 2026-01-05 00:14:52.269539 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-01-05 00:14:52.302268 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:14:52.302323 | orchestrator | 2026-01-05 00:14:52.302330 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-01-05 00:14:52.344330 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:14:52.344416 | orchestrator | 2026-01-05 00:14:52.344436 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-01-05 00:14:52.423303 | 
orchestrator | skipping: [testbed-manager] 2026-01-05 00:14:52.423387 | orchestrator | 2026-01-05 00:14:52.423402 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-01-05 00:14:53.172029 | orchestrator | ok: [testbed-manager] 2026-01-05 00:14:53.172087 | orchestrator | 2026-01-05 00:14:53.172096 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-01-05 00:14:53.172105 | orchestrator | 2026-01-05 00:14:53.172111 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-05 00:14:54.702752 | orchestrator | ok: [testbed-manager] 2026-01-05 00:14:54.702795 | orchestrator | 2026-01-05 00:14:54.702847 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-01-05 00:14:55.710866 | orchestrator | changed: [testbed-manager] 2026-01-05 00:14:55.710920 | orchestrator | 2026-01-05 00:14:55.710930 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:14:55.710939 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-01-05 00:14:55.710947 | orchestrator | 2026-01-05 00:14:56.179713 | orchestrator | ok: Runtime: 0:10:07.788399 2026-01-05 00:14:56.198681 | 2026-01-05 00:14:56.198896 | TASK [Point out that logging in on the manager is now possible] 2026-01-05 00:14:56.239864 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-01-05 00:14:56.251084 | 2026-01-05 00:14:56.251236 | TASK [Point out that the following task takes some time and does not give any output] 2026-01-05 00:14:56.290968 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2026-01-05 00:14:56.302068 | 2026-01-05 00:14:56.302221 | TASK [Run manager part 1 + 2] 2026-01-05 00:14:57.190287 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-05 00:14:57.250328 | orchestrator | 2026-01-05 00:14:57.250388 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-01-05 00:14:57.250395 | orchestrator | 2026-01-05 00:14:57.250410 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-05 00:15:00.294537 | orchestrator | ok: [testbed-manager] 2026-01-05 00:15:00.294594 | orchestrator | 2026-01-05 00:15:00.294621 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-01-05 00:15:00.342860 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:15:00.342939 | orchestrator | 2026-01-05 00:15:00.342954 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-01-05 00:15:00.395732 | orchestrator | ok: [testbed-manager] 2026-01-05 00:15:00.395795 | orchestrator | 2026-01-05 00:15:00.395805 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-01-05 00:15:00.449609 | orchestrator | ok: [testbed-manager] 2026-01-05 00:15:00.449673 | orchestrator | 2026-01-05 00:15:00.449685 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-05 00:15:00.525640 | orchestrator | ok: [testbed-manager] 2026-01-05 00:15:00.525709 | orchestrator | 2026-01-05 00:15:00.525720 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-05 00:15:00.599850 | orchestrator | ok: [testbed-manager] 2026-01-05 00:15:00.599919 | orchestrator | 2026-01-05 00:15:00.599931 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-05 00:15:00.654860 | 
orchestrator | included: /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-01-05 00:15:00.654931 | orchestrator | 2026-01-05 00:15:00.654938 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-05 00:15:01.423909 | orchestrator | ok: [testbed-manager] 2026-01-05 00:15:01.423962 | orchestrator | 2026-01-05 00:15:01.423972 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-05 00:15:01.475451 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:15:01.475502 | orchestrator | 2026-01-05 00:15:01.475511 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-05 00:15:02.924714 | orchestrator | changed: [testbed-manager] 2026-01-05 00:15:02.924916 | orchestrator | 2026-01-05 00:15:02.924931 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-05 00:15:03.537997 | orchestrator | ok: [testbed-manager] 2026-01-05 00:15:03.538089 | orchestrator | 2026-01-05 00:15:03.538099 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-01-05 00:15:04.722167 | orchestrator | changed: [testbed-manager] 2026-01-05 00:15:04.722232 | orchestrator | 2026-01-05 00:15:04.722246 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-05 00:15:20.306570 | orchestrator | changed: [testbed-manager] 2026-01-05 00:15:20.306650 | orchestrator | 2026-01-05 00:15:20.306668 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-01-05 00:15:21.010505 | orchestrator | ok: [testbed-manager] 2026-01-05 00:15:21.010610 | orchestrator | 2026-01-05 00:15:21.010630 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-01-05 00:15:21.069750 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:15:21.069851 | orchestrator | 2026-01-05 00:15:21.069872 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-01-05 00:15:22.057601 | orchestrator | changed: [testbed-manager] 2026-01-05 00:15:22.057721 | orchestrator | 2026-01-05 00:15:22.057749 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-01-05 00:15:23.080556 | orchestrator | changed: [testbed-manager] 2026-01-05 00:15:23.081747 | orchestrator | 2026-01-05 00:15:23.081781 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-01-05 00:15:23.677528 | orchestrator | changed: [testbed-manager] 2026-01-05 00:15:23.677589 | orchestrator | 2026-01-05 00:15:23.677599 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-01-05 00:15:23.719090 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-05 00:15:23.719202 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-05 00:15:23.719217 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-05 00:15:23.719229 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-01-05 00:15:25.846690 | orchestrator | changed: [testbed-manager] 2026-01-05 00:15:25.846811 | orchestrator | 2026-01-05 00:15:25.846829 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-01-05 00:15:35.152836 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-01-05 00:15:35.152962 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-01-05 00:15:35.152978 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-01-05 00:15:35.152989 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-01-05 00:15:35.153007 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-01-05 00:15:35.153016 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-01-05 00:15:35.153028 | orchestrator | 2026-01-05 00:15:35.153041 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-01-05 00:15:36.262694 | orchestrator | changed: [testbed-manager] 2026-01-05 00:15:36.262832 | orchestrator | 2026-01-05 00:15:36.262851 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-01-05 00:15:36.308325 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:15:36.308430 | orchestrator | 2026-01-05 00:15:36.308447 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-01-05 00:15:39.453674 | orchestrator | changed: [testbed-manager] 2026-01-05 00:15:39.453787 | orchestrator | 2026-01-05 00:15:39.453805 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-01-05 00:15:39.498970 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:15:39.499073 | orchestrator | 2026-01-05 00:15:39.499092 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-01-05 00:17:20.934402 | orchestrator | changed: [testbed-manager] 2026-01-05 
00:17:20.934548 | orchestrator | 2026-01-05 00:17:20.934568 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-05 00:17:22.201463 | orchestrator | ok: [testbed-manager] 2026-01-05 00:17:22.201533 | orchestrator | 2026-01-05 00:17:22.201542 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:17:22.201550 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-01-05 00:17:22.201556 | orchestrator | 2026-01-05 00:17:22.433014 | orchestrator | ok: Runtime: 0:02:25.671378 2026-01-05 00:17:22.441672 | 2026-01-05 00:17:22.441802 | TASK [Reboot manager] 2026-01-05 00:17:23.976216 | orchestrator | ok: Runtime: 0:00:01.068445 2026-01-05 00:17:23.990495 | 2026-01-05 00:17:23.990662 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-01-05 00:17:40.408937 | orchestrator | ok 2026-01-05 00:17:40.421606 | 2026-01-05 00:17:40.421762 | TASK [Wait a little longer for the manager so that everything is ready] 2026-01-05 00:18:40.474626 | orchestrator | ok 2026-01-05 00:18:40.485987 | 2026-01-05 00:18:40.486141 | TASK [Deploy manager + bootstrap nodes] 2026-01-05 00:18:43.193658 | orchestrator | 2026-01-05 00:18:43.193973 | orchestrator | # DEPLOY MANAGER 2026-01-05 00:18:43.194092 | orchestrator | 2026-01-05 00:18:43.194127 | orchestrator | + set -e 2026-01-05 00:18:43.194150 | orchestrator | + echo 2026-01-05 00:18:43.194173 | orchestrator | + echo '# DEPLOY MANAGER' 2026-01-05 00:18:43.194198 | orchestrator | + echo 2026-01-05 00:18:43.194280 | orchestrator | + cat /opt/manager-vars.sh 2026-01-05 00:18:43.196816 | orchestrator | export NUMBER_OF_NODES=6 2026-01-05 00:18:43.196903 | orchestrator | 2026-01-05 00:18:43.196921 | orchestrator | export CEPH_VERSION=reef 2026-01-05 00:18:43.196936 | orchestrator | export CONFIGURATION_VERSION=main 2026-01-05 00:18:43.196949 | orchestrator 
| export MANAGER_VERSION=latest 2026-01-05 00:18:43.196978 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-01-05 00:18:43.196990 | orchestrator | 2026-01-05 00:18:43.197009 | orchestrator | export ARA=false 2026-01-05 00:18:43.197021 | orchestrator | export DEPLOY_MODE=manager 2026-01-05 00:18:43.197038 | orchestrator | export TEMPEST=true 2026-01-05 00:18:43.197050 | orchestrator | export IS_ZUUL=true 2026-01-05 00:18:43.197061 | orchestrator | 2026-01-05 00:18:43.197080 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.14 2026-01-05 00:18:43.197092 | orchestrator | export EXTERNAL_API=false 2026-01-05 00:18:43.197103 | orchestrator | 2026-01-05 00:18:43.197114 | orchestrator | export IMAGE_USER=ubuntu 2026-01-05 00:18:43.197129 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-01-05 00:18:43.197140 | orchestrator | 2026-01-05 00:18:43.197151 | orchestrator | export CEPH_STACK=ceph-ansible 2026-01-05 00:18:43.197178 | orchestrator | 2026-01-05 00:18:43.197213 | orchestrator | + echo 2026-01-05 00:18:43.197227 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-05 00:18:43.198076 | orchestrator | ++ export INTERACTIVE=false 2026-01-05 00:18:43.198138 | orchestrator | ++ INTERACTIVE=false 2026-01-05 00:18:43.198154 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-05 00:18:43.198196 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-05 00:18:43.198241 | orchestrator | + source /opt/manager-vars.sh 2026-01-05 00:18:43.198291 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-05 00:18:43.198304 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-05 00:18:43.198320 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-05 00:18:43.198331 | orchestrator | ++ CEPH_VERSION=reef 2026-01-05 00:18:43.198342 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-05 00:18:43.198353 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-05 00:18:43.198364 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-05 00:18:43.198375 | 
orchestrator | ++ MANAGER_VERSION=latest 2026-01-05 00:18:43.198386 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-05 00:18:43.198409 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-05 00:18:43.198637 | orchestrator | ++ export ARA=false 2026-01-05 00:18:43.198656 | orchestrator | ++ ARA=false 2026-01-05 00:18:43.198668 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-05 00:18:43.198679 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-05 00:18:43.198689 | orchestrator | ++ export TEMPEST=true 2026-01-05 00:18:43.198700 | orchestrator | ++ TEMPEST=true 2026-01-05 00:18:43.198711 | orchestrator | ++ export IS_ZUUL=true 2026-01-05 00:18:43.198722 | orchestrator | ++ IS_ZUUL=true 2026-01-05 00:18:43.198732 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.14 2026-01-05 00:18:43.198743 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.14 2026-01-05 00:18:43.198754 | orchestrator | ++ export EXTERNAL_API=false 2026-01-05 00:18:43.198765 | orchestrator | ++ EXTERNAL_API=false 2026-01-05 00:18:43.198776 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-05 00:18:43.198786 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-05 00:18:43.198797 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-05 00:18:43.198808 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-05 00:18:43.198819 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-05 00:18:43.198829 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-05 00:18:43.198840 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-01-05 00:18:43.251147 | orchestrator | + docker version 2026-01-05 00:18:43.547400 | orchestrator | Client: Docker Engine - Community 2026-01-05 00:18:43.547540 | orchestrator | Version: 27.5.1 2026-01-05 00:18:43.547558 | orchestrator | API version: 1.47 2026-01-05 00:18:43.547573 | orchestrator | Go version: go1.22.11 2026-01-05 00:18:43.547585 | orchestrator | Git commit: 9f9e405 2026-01-05 00:18:43.547595 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-05 00:18:43.547609 | orchestrator | OS/Arch: linux/amd64 2026-01-05 00:18:43.547620 | orchestrator | Context: default 2026-01-05 00:18:43.547631 | orchestrator | 2026-01-05 00:18:43.547644 | orchestrator | Server: Docker Engine - Community 2026-01-05 00:18:43.547651 | orchestrator | Engine: 2026-01-05 00:18:43.547658 | orchestrator | Version: 27.5.1 2026-01-05 00:18:43.547666 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-01-05 00:18:43.547702 | orchestrator | Go version: go1.22.11 2026-01-05 00:18:43.547710 | orchestrator | Git commit: 4c9b3b0 2026-01-05 00:18:43.547716 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-05 00:18:43.547723 | orchestrator | OS/Arch: linux/amd64 2026-01-05 00:18:43.547743 | orchestrator | Experimental: false 2026-01-05 00:18:43.547750 | orchestrator | containerd: 2026-01-05 00:18:43.547757 | orchestrator | Version: v2.2.1 2026-01-05 00:18:43.547764 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-01-05 00:18:43.547771 | orchestrator | runc: 2026-01-05 00:18:43.547778 | orchestrator | Version: 1.3.4 2026-01-05 00:18:43.547785 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-01-05 00:18:43.547791 | orchestrator | docker-init: 2026-01-05 00:18:43.547798 | orchestrator | Version: 0.19.0 2026-01-05 00:18:43.547805 | orchestrator | GitCommit: de40ad0 2026-01-05 00:18:43.552991 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-01-05 00:18:43.563552 | orchestrator | + set -e 2026-01-05 00:18:43.563610 | orchestrator | + source /opt/manager-vars.sh 2026-01-05 00:18:43.563618 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-05 00:18:43.563626 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-05 00:18:43.563632 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-05 00:18:43.563637 | orchestrator | ++ CEPH_VERSION=reef 2026-01-05 00:18:43.563643 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-05 
00:18:43.563650 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-05 00:18:43.563655 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-05 00:18:43.563661 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-05 00:18:43.563667 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-05 00:18:43.563672 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-05 00:18:43.563677 | orchestrator | ++ export ARA=false 2026-01-05 00:18:43.563683 | orchestrator | ++ ARA=false 2026-01-05 00:18:43.563688 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-05 00:18:43.563694 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-05 00:18:43.563700 | orchestrator | ++ export TEMPEST=true 2026-01-05 00:18:43.563705 | orchestrator | ++ TEMPEST=true 2026-01-05 00:18:43.563711 | orchestrator | ++ export IS_ZUUL=true 2026-01-05 00:18:43.563716 | orchestrator | ++ IS_ZUUL=true 2026-01-05 00:18:43.563721 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.14 2026-01-05 00:18:43.563727 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.14 2026-01-05 00:18:43.563732 | orchestrator | ++ export EXTERNAL_API=false 2026-01-05 00:18:43.563738 | orchestrator | ++ EXTERNAL_API=false 2026-01-05 00:18:43.563743 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-05 00:18:43.563748 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-05 00:18:43.563754 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-05 00:18:43.563759 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-05 00:18:43.563764 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-05 00:18:43.563770 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-05 00:18:43.563775 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-05 00:18:43.563781 | orchestrator | ++ export INTERACTIVE=false 2026-01-05 00:18:43.563786 | orchestrator | ++ INTERACTIVE=false 2026-01-05 00:18:43.563791 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-05 00:18:43.563801 | orchestrator | ++ OSISM_APPLY_RETRY=1 
2026-01-05 00:18:43.563807 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-05 00:18:43.563812 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-05 00:18:43.563818 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2026-01-05 00:18:43.571616 | orchestrator | + set -e 2026-01-05 00:18:43.571678 | orchestrator | + VERSION=reef 2026-01-05 00:18:43.573182 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-01-05 00:18:43.580271 | orchestrator | + [[ -n ceph_version: reef ]] 2026-01-05 00:18:43.580356 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-01-05 00:18:43.586682 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2026-01-05 00:18:43.596000 | orchestrator | + set -e 2026-01-05 00:18:43.596103 | orchestrator | + VERSION=2024.2 2026-01-05 00:18:43.596942 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-01-05 00:18:43.601286 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-01-05 00:18:43.601321 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2026-01-05 00:18:43.606722 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-01-05 00:18:43.607573 | orchestrator | ++ semver latest 7.0.0 2026-01-05 00:18:43.688482 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-05 00:18:43.688580 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-05 00:18:43.688594 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-01-05 00:18:43.689594 | orchestrator | ++ semver latest 10.0.0-0 2026-01-05 00:18:43.756101 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-05 00:18:43.756281 | orchestrator | ++ semver 2024.2 2025.1 2026-01-05 00:18:43.822665 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-05 00:18:43.822781 | orchestrator | + 
/opt/configuration/scripts/enable-resource-nodes.sh 2026-01-05 00:18:43.920182 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-05 00:18:43.922927 | orchestrator | + source /opt/venv/bin/activate 2026-01-05 00:18:43.924657 | orchestrator | ++ deactivate nondestructive 2026-01-05 00:18:43.924682 | orchestrator | ++ '[' -n '' ']' 2026-01-05 00:18:43.924694 | orchestrator | ++ '[' -n '' ']' 2026-01-05 00:18:43.924739 | orchestrator | ++ hash -r 2026-01-05 00:18:43.924968 | orchestrator | ++ '[' -n '' ']' 2026-01-05 00:18:43.924985 | orchestrator | ++ unset VIRTUAL_ENV 2026-01-05 00:18:43.924997 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-01-05 00:18:43.925011 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-01-05 00:18:43.925325 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-01-05 00:18:43.925344 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-01-05 00:18:43.925355 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-01-05 00:18:43.925366 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-01-05 00:18:43.925384 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-05 00:18:43.925397 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-05 00:18:43.925408 | orchestrator | ++ export PATH 2026-01-05 00:18:43.925420 | orchestrator | ++ '[' -n '' ']' 2026-01-05 00:18:43.925557 | orchestrator | ++ '[' -z '' ']' 2026-01-05 00:18:43.925572 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-01-05 00:18:43.925584 | orchestrator | ++ PS1='(venv) ' 2026-01-05 00:18:43.925595 | orchestrator | ++ export PS1 2026-01-05 00:18:43.925626 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-01-05 00:18:43.925644 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-01-05 00:18:43.925659 | orchestrator | ++ hash -r 2026-01-05 00:18:43.926218 | orchestrator | + ansible-playbook -i 
testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-01-05 00:18:45.498785 | orchestrator | 2026-01-05 00:18:45.498907 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-01-05 00:18:45.498919 | orchestrator | 2026-01-05 00:18:45.498954 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-01-05 00:18:46.107722 | orchestrator | ok: [testbed-manager] 2026-01-05 00:18:46.107858 | orchestrator | 2026-01-05 00:18:46.107876 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-01-05 00:18:47.148159 | orchestrator | changed: [testbed-manager] 2026-01-05 00:18:47.148388 | orchestrator | 2026-01-05 00:18:47.148418 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-01-05 00:18:47.148441 | orchestrator | 2026-01-05 00:18:47.148460 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-05 00:18:49.693074 | orchestrator | ok: [testbed-manager] 2026-01-05 00:18:49.693205 | orchestrator | 2026-01-05 00:18:49.693221 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-01-05 00:18:49.753394 | orchestrator | ok: [testbed-manager] 2026-01-05 00:18:49.753514 | orchestrator | 2026-01-05 00:18:49.753532 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-01-05 00:18:50.234661 | orchestrator | changed: [testbed-manager] 2026-01-05 00:18:50.234793 | orchestrator | 2026-01-05 00:18:50.234812 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-01-05 00:18:50.278836 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:18:50.278948 | orchestrator | 2026-01-05 00:18:50.278962 | orchestrator | TASK [Install HWE 
kernel package on Ubuntu] ************************************ 2026-01-05 00:18:50.654414 | orchestrator | changed: [testbed-manager] 2026-01-05 00:18:50.654544 | orchestrator | 2026-01-05 00:18:50.654561 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2026-01-05 00:18:50.715315 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:18:50.715432 | orchestrator | 2026-01-05 00:18:50.715447 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-01-05 00:18:51.061842 | orchestrator | ok: [testbed-manager] 2026-01-05 00:18:51.061954 | orchestrator | 2026-01-05 00:18:51.061972 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-01-05 00:18:51.195466 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:18:51.195575 | orchestrator | 2026-01-05 00:18:51.195589 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-01-05 00:18:51.195602 | orchestrator | 2026-01-05 00:18:51.195614 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-05 00:18:52.926521 | orchestrator | ok: [testbed-manager] 2026-01-05 00:18:52.926645 | orchestrator | 2026-01-05 00:18:52.926661 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-01-05 00:18:53.012086 | orchestrator | included: osism.services.traefik for testbed-manager 2026-01-05 00:18:53.012188 | orchestrator | 2026-01-05 00:18:53.012202 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-01-05 00:18:53.070515 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-01-05 00:18:53.070594 | orchestrator | 2026-01-05 00:18:53.070608 | orchestrator | TASK [osism.services.traefik : Create required 
directories] ******************** 2026-01-05 00:18:54.081585 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-01-05 00:18:54.081739 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-01-05 00:18:54.081766 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-01-05 00:18:54.081785 | orchestrator | 2026-01-05 00:18:54.081807 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-01-05 00:18:55.773814 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-01-05 00:18:55.773962 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-01-05 00:18:55.773983 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-01-05 00:18:55.773997 | orchestrator | 2026-01-05 00:18:55.774010 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-01-05 00:18:56.395311 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-05 00:18:56.395427 | orchestrator | changed: [testbed-manager] 2026-01-05 00:18:56.395443 | orchestrator | 2026-01-05 00:18:56.395457 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-01-05 00:18:57.001558 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-05 00:18:57.001673 | orchestrator | changed: [testbed-manager] 2026-01-05 00:18:57.001688 | orchestrator | 2026-01-05 00:18:57.001701 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-01-05 00:18:57.059542 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:18:57.059663 | orchestrator | 2026-01-05 00:18:57.059682 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-01-05 00:18:57.419297 | orchestrator | ok: [testbed-manager] 2026-01-05 00:18:57.419422 | orchestrator | 2026-01-05 00:18:57.419437 | 
orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-01-05 00:18:57.495433 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-01-05 00:18:57.495569 | orchestrator | 2026-01-05 00:18:57.495591 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-01-05 00:18:58.566180 | orchestrator | changed: [testbed-manager] 2026-01-05 00:18:58.566365 | orchestrator | 2026-01-05 00:18:58.566402 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-01-05 00:18:59.467007 | orchestrator | changed: [testbed-manager] 2026-01-05 00:18:59.467129 | orchestrator | 2026-01-05 00:18:59.467146 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-01-05 00:19:10.005486 | orchestrator | changed: [testbed-manager] 2026-01-05 00:19:10.005618 | orchestrator | 2026-01-05 00:19:10.005637 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-01-05 00:19:10.047084 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:19:10.047147 | orchestrator | 2026-01-05 00:19:10.047160 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-01-05 00:19:10.047172 | orchestrator | 2026-01-05 00:19:10.047214 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-05 00:19:11.954370 | orchestrator | ok: [testbed-manager] 2026-01-05 00:19:11.954483 | orchestrator | 2026-01-05 00:19:11.954497 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-01-05 00:19:12.117421 | orchestrator | included: osism.services.manager for testbed-manager 2026-01-05 00:19:12.117531 | orchestrator | 2026-01-05 00:19:12.117547 | orchestrator | TASK 
[osism.services.manager : Include install tasks] ************************** 2026-01-05 00:19:12.194478 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-01-05 00:19:12.194579 | orchestrator | 2026-01-05 00:19:12.194594 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-01-05 00:19:15.196725 | orchestrator | ok: [testbed-manager] 2026-01-05 00:19:15.196892 | orchestrator | 2026-01-05 00:19:15.196910 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-01-05 00:19:15.254209 | orchestrator | ok: [testbed-manager] 2026-01-05 00:19:15.254343 | orchestrator | 2026-01-05 00:19:15.254357 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-01-05 00:19:15.414730 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-01-05 00:19:15.414848 | orchestrator | 2026-01-05 00:19:15.414865 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-01-05 00:19:18.448169 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-01-05 00:19:18.448329 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-01-05 00:19:18.448344 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-01-05 00:19:18.448355 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-01-05 00:19:18.448366 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-01-05 00:19:18.448376 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-01-05 00:19:18.448386 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-01-05 00:19:18.448396 | orchestrator | changed: [testbed-manager] 
=> (item=/opt/state) 2026-01-05 00:19:18.448407 | orchestrator | 2026-01-05 00:19:18.448418 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-01-05 00:19:19.148136 | orchestrator | changed: [testbed-manager] 2026-01-05 00:19:19.148286 | orchestrator | 2026-01-05 00:19:19.148301 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-01-05 00:19:19.803787 | orchestrator | changed: [testbed-manager] 2026-01-05 00:19:19.803891 | orchestrator | 2026-01-05 00:19:19.803901 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-01-05 00:19:19.892595 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-01-05 00:19:19.892704 | orchestrator | 2026-01-05 00:19:19.892721 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-01-05 00:19:21.194725 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-01-05 00:19:21.194872 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-01-05 00:19:21.194887 | orchestrator | 2026-01-05 00:19:21.194899 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-01-05 00:19:21.892439 | orchestrator | changed: [testbed-manager] 2026-01-05 00:19:21.892564 | orchestrator | 2026-01-05 00:19:21.892580 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-01-05 00:19:21.953973 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:19:21.954101 | orchestrator | 2026-01-05 00:19:21.954110 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-01-05 00:19:22.038116 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-01-05 00:19:22.038366 | orchestrator | 2026-01-05 00:19:22.038407 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-01-05 00:19:22.669746 | orchestrator | changed: [testbed-manager] 2026-01-05 00:19:22.669838 | orchestrator | 2026-01-05 00:19:22.669874 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-01-05 00:19:22.736640 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-01-05 00:19:22.736764 | orchestrator | 2026-01-05 00:19:22.736781 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-01-05 00:19:24.136678 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-05 00:19:24.136781 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-05 00:19:24.136795 | orchestrator | changed: [testbed-manager] 2026-01-05 00:19:24.136808 | orchestrator | 2026-01-05 00:19:24.136820 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-01-05 00:19:24.764487 | orchestrator | changed: [testbed-manager] 2026-01-05 00:19:24.764598 | orchestrator | 2026-01-05 00:19:24.764614 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-01-05 00:19:24.826586 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:19:24.826686 | orchestrator | 2026-01-05 00:19:24.826701 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-01-05 00:19:24.920625 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-01-05 00:19:24.920716 | orchestrator | 
2026-01-05 00:19:24.920759 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-01-05 00:19:25.484761 | orchestrator | changed: [testbed-manager] 2026-01-05 00:19:25.484878 | orchestrator | 2026-01-05 00:19:25.484892 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-01-05 00:19:25.906962 | orchestrator | changed: [testbed-manager] 2026-01-05 00:19:25.907097 | orchestrator | 2026-01-05 00:19:25.907124 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-01-05 00:19:27.147834 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-01-05 00:19:27.147950 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-01-05 00:19:27.147967 | orchestrator | 2026-01-05 00:19:27.147981 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-01-05 00:19:27.798813 | orchestrator | changed: [testbed-manager] 2026-01-05 00:19:27.798950 | orchestrator | 2026-01-05 00:19:27.798977 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-01-05 00:19:28.206372 | orchestrator | ok: [testbed-manager] 2026-01-05 00:19:28.206484 | orchestrator | 2026-01-05 00:19:28.206501 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-01-05 00:19:28.567687 | orchestrator | changed: [testbed-manager] 2026-01-05 00:19:28.567800 | orchestrator | 2026-01-05 00:19:28.567817 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-01-05 00:19:28.600626 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:19:28.600708 | orchestrator | 2026-01-05 00:19:28.600722 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-01-05 00:19:28.697957 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-01-05 00:19:28.698125 | orchestrator | 2026-01-05 00:19:28.698142 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-01-05 00:19:28.739301 | orchestrator | ok: [testbed-manager] 2026-01-05 00:19:28.739405 | orchestrator | 2026-01-05 00:19:28.739419 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-01-05 00:19:30.806163 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-01-05 00:19:30.806318 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-01-05 00:19:30.806352 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-01-05 00:19:30.806366 | orchestrator | 2026-01-05 00:19:30.806380 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-01-05 00:19:31.545788 | orchestrator | changed: [testbed-manager] 2026-01-05 00:19:31.545931 | orchestrator | 2026-01-05 00:19:31.545961 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-01-05 00:19:32.300916 | orchestrator | changed: [testbed-manager] 2026-01-05 00:19:32.301030 | orchestrator | 2026-01-05 00:19:32.301048 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-01-05 00:19:33.050536 | orchestrator | changed: [testbed-manager] 2026-01-05 00:19:33.050652 | orchestrator | 2026-01-05 00:19:33.050669 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-01-05 00:19:33.117192 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-01-05 00:19:33.117321 | orchestrator | 2026-01-05 00:19:33.117335 | orchestrator | TASK 
[osism.services.manager : Include scripts vars file] ********************** 2026-01-05 00:19:33.173352 | orchestrator | ok: [testbed-manager] 2026-01-05 00:19:33.173454 | orchestrator | 2026-01-05 00:19:33.173468 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-01-05 00:19:33.911487 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-01-05 00:19:33.911603 | orchestrator | 2026-01-05 00:19:33.911620 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-01-05 00:19:34.005130 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-01-05 00:19:34.005241 | orchestrator | 2026-01-05 00:19:34.005284 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-01-05 00:19:34.744957 | orchestrator | changed: [testbed-manager] 2026-01-05 00:19:34.745074 | orchestrator | 2026-01-05 00:19:34.745092 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-01-05 00:19:35.365944 | orchestrator | ok: [testbed-manager] 2026-01-05 00:19:35.366120 | orchestrator | 2026-01-05 00:19:35.366140 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-01-05 00:19:35.428112 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:19:35.428208 | orchestrator | 2026-01-05 00:19:35.428220 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-01-05 00:19:35.486883 | orchestrator | ok: [testbed-manager] 2026-01-05 00:19:35.487009 | orchestrator | 2026-01-05 00:19:35.487032 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-01-05 00:19:36.351707 | orchestrator | changed: [testbed-manager] 2026-01-05 00:19:36.351848 | orchestrator | 2026-01-05 
00:19:36.351875 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-01-05 00:20:46.427929 | orchestrator | changed: [testbed-manager] 2026-01-05 00:20:46.427991 | orchestrator | 2026-01-05 00:20:46.428004 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-01-05 00:20:48.452209 | orchestrator | ok: [testbed-manager] 2026-01-05 00:20:48.452336 | orchestrator | 2026-01-05 00:20:48.452406 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-01-05 00:20:48.506580 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:20:48.506668 | orchestrator | 2026-01-05 00:20:48.506676 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-01-05 00:20:51.172383 | orchestrator | changed: [testbed-manager] 2026-01-05 00:20:51.172530 | orchestrator | 2026-01-05 00:20:51.172574 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-01-05 00:20:51.236913 | orchestrator | ok: [testbed-manager] 2026-01-05 00:20:51.237027 | orchestrator | 2026-01-05 00:20:51.237042 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-01-05 00:20:51.237055 | orchestrator | 2026-01-05 00:20:51.237067 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-01-05 00:20:51.291886 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:20:51.291989 | orchestrator | 2026-01-05 00:20:51.292004 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-01-05 00:21:51.345577 | orchestrator | Pausing for 60 seconds 2026-01-05 00:21:51.345688 | orchestrator | changed: [testbed-manager] 2026-01-05 00:21:51.345700 | orchestrator | 2026-01-05 00:21:51.345709 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure 
that all containers are up] *** 2026-01-05 00:21:53.911159 | orchestrator | changed: [testbed-manager] 2026-01-05 00:21:53.911300 | orchestrator | 2026-01-05 00:21:53.911318 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-01-05 00:22:56.039978 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-01-05 00:22:56.040182 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-01-05 00:22:56.040214 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-01-05 00:22:56.040233 | orchestrator | changed: [testbed-manager] 2026-01-05 00:22:56.040252 | orchestrator | 2026-01-05 00:22:56.040272 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-01-05 00:23:07.288366 | orchestrator | changed: [testbed-manager] 2026-01-05 00:23:07.288540 | orchestrator | 2026-01-05 00:23:07.288559 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-01-05 00:23:07.380331 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-01-05 00:23:07.380436 | orchestrator | 2026-01-05 00:23:07.380451 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-01-05 00:23:07.380464 | orchestrator | 2026-01-05 00:23:07.380477 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-01-05 00:23:07.446109 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:23:07.446223 | orchestrator | 2026-01-05 00:23:07.446238 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-01-05 00:23:07.512035 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-01-05 00:23:07.512149 | orchestrator | 2026-01-05 00:23:07.512163 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-01-05 00:23:08.299668 | orchestrator | changed: [testbed-manager] 2026-01-05 00:23:08.299787 | orchestrator | 2026-01-05 00:23:08.299804 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-01-05 00:23:11.589324 | orchestrator | ok: [testbed-manager] 2026-01-05 00:23:11.589471 | orchestrator | 2026-01-05 00:23:11.589488 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-01-05 00:23:11.672243 | orchestrator | ok: [testbed-manager] => { 2026-01-05 00:23:11.672359 | orchestrator | "version_check_result.stdout_lines": [ 2026-01-05 00:23:11.672378 | orchestrator | "=== OSISM Container Version Check ===", 2026-01-05 00:23:11.672390 | orchestrator | "Checking running containers against expected versions...", 2026-01-05 00:23:11.672403 | orchestrator | "", 2026-01-05 00:23:11.672417 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-01-05 00:23:11.672429 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-01-05 00:23:11.672441 | orchestrator | " Enabled: true", 2026-01-05 00:23:11.672453 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-01-05 00:23:11.672464 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:23:11.672476 | orchestrator | "", 2026-01-05 00:23:11.672488 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-01-05 00:23:11.672529 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-01-05 00:23:11.672540 | orchestrator | " Enabled: true", 2026-01-05 00:23:11.672551 | orchestrator | " Running: 
registry.osism.tech/osism/osism-ansible:latest", 2026-01-05 00:23:11.672561 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:23:11.672572 | orchestrator | "", 2026-01-05 00:23:11.672583 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-01-05 00:23:11.672594 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-01-05 00:23:11.672605 | orchestrator | " Enabled: true", 2026-01-05 00:23:11.672615 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-01-05 00:23:11.672626 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:23:11.672637 | orchestrator | "", 2026-01-05 00:23:11.672648 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-01-05 00:23:11.672659 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-01-05 00:23:11.672670 | orchestrator | " Enabled: true", 2026-01-05 00:23:11.672681 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-01-05 00:23:11.672718 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:23:11.672729 | orchestrator | "", 2026-01-05 00:23:11.672740 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-01-05 00:23:11.672751 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-01-05 00:23:11.672765 | orchestrator | " Enabled: true", 2026-01-05 00:23:11.672778 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-01-05 00:23:11.672790 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:23:11.672803 | orchestrator | "", 2026-01-05 00:23:11.672816 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-01-05 00:23:11.672830 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-05 00:23:11.672842 | orchestrator | " Enabled: true", 2026-01-05 00:23:11.672856 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-05 00:23:11.672869 | 
orchestrator | " Status: ✅ MATCH", 2026-01-05 00:23:11.672882 | orchestrator | "", 2026-01-05 00:23:11.672895 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-01-05 00:23:11.672907 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-01-05 00:23:11.672920 | orchestrator | " Enabled: true", 2026-01-05 00:23:11.672932 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-01-05 00:23:11.672945 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:23:11.672957 | orchestrator | "", 2026-01-05 00:23:11.672969 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-01-05 00:23:11.672982 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-01-05 00:23:11.672994 | orchestrator | " Enabled: true", 2026-01-05 00:23:11.673015 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-01-05 00:23:11.673034 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:23:11.673047 | orchestrator | "", 2026-01-05 00:23:11.673061 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-01-05 00:23:11.673074 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-01-05 00:23:11.673086 | orchestrator | " Enabled: true", 2026-01-05 00:23:11.673099 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-01-05 00:23:11.673112 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:23:11.673123 | orchestrator | "", 2026-01-05 00:23:11.673134 | orchestrator | "Checking service: redis (Redis Cache)", 2026-01-05 00:23:11.673145 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-01-05 00:23:11.673156 | orchestrator | " Enabled: true", 2026-01-05 00:23:11.673166 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-01-05 00:23:11.673177 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:23:11.673188 | orchestrator | "", 
2026-01-05 00:23:11.673199 | orchestrator | "Checking service: api (OSISM API Service)",
2026-01-05 00:23:11.673210 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-05 00:23:11.673221 | orchestrator | " Enabled: true",
2026-01-05 00:23:11.673232 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-05 00:23:11.673243 | orchestrator | " Status: ✅ MATCH",
2026-01-05 00:23:11.673253 | orchestrator | "",
2026-01-05 00:23:11.673264 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-01-05 00:23:11.673275 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-05 00:23:11.673286 | orchestrator | " Enabled: true",
2026-01-05 00:23:11.673296 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-05 00:23:11.673307 | orchestrator | " Status: ✅ MATCH",
2026-01-05 00:23:11.673318 | orchestrator | "",
2026-01-05 00:23:11.673329 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-01-05 00:23:11.673340 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-05 00:23:11.673350 | orchestrator | " Enabled: true",
2026-01-05 00:23:11.673361 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-05 00:23:11.673372 | orchestrator | " Status: ✅ MATCH",
2026-01-05 00:23:11.673383 | orchestrator | "",
2026-01-05 00:23:11.673394 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-01-05 00:23:11.673412 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-05 00:23:11.673423 | orchestrator | " Enabled: true",
2026-01-05 00:23:11.673434 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-05 00:23:11.673459 | orchestrator | " Status: ✅ MATCH",
2026-01-05 00:23:11.673480 | orchestrator | "",
2026-01-05 00:23:11.673519 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-01-05 00:23:11.673550 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-05 00:23:11.673561 | orchestrator | " Enabled: true",
2026-01-05 00:23:11.673572 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-05 00:23:11.673583 | orchestrator | " Status: ✅ MATCH",
2026-01-05 00:23:11.673594 | orchestrator | "",
2026-01-05 00:23:11.673605 | orchestrator | "=== Summary ===",
2026-01-05 00:23:11.673615 | orchestrator | "Errors (version mismatches): 0",
2026-01-05 00:23:11.673626 | orchestrator | "Warnings (expected containers not running): 0",
2026-01-05 00:23:11.673637 | orchestrator | "",
2026-01-05 00:23:11.673648 | orchestrator | "✅ All running containers match expected versions!"
2026-01-05 00:23:11.673659 | orchestrator | ]
2026-01-05 00:23:11.673670 | orchestrator | }
2026-01-05 00:23:11.673682 | orchestrator | 
2026-01-05 00:23:11.673693 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-01-05 00:23:11.720828 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:23:11.720951 | orchestrator | 
2026-01-05 00:23:11.720969 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:23:11.720983 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
2026-01-05 00:23:11.720996 | orchestrator | 
2026-01-05 00:23:11.824693 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-01-05 00:23:11.824850 | orchestrator | + deactivate
2026-01-05 00:23:11.824862 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-01-05 00:23:11.824872 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-05 00:23:11.824878 | orchestrator | + export PATH
2026-01-05 00:23:11.824885 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-01-05 00:23:11.824892 | orchestrator | + '[' -n '' ']'
2026-01-05 00:23:11.824917 | orchestrator | + hash -r
2026-01-05 00:23:11.824998 | orchestrator | + '[' -n '' ']'
2026-01-05 00:23:11.825012 | orchestrator | + unset VIRTUAL_ENV
2026-01-05 00:23:11.825022 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-01-05 00:23:11.825031 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-01-05 00:23:11.825038 | orchestrator | + unset -f deactivate
2026-01-05 00:23:11.825045 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2026-01-05 00:23:11.834372 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-01-05 00:23:11.834427 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-01-05 00:23:11.834436 | orchestrator | + local max_attempts=60
2026-01-05 00:23:11.834443 | orchestrator | + local name=ceph-ansible
2026-01-05 00:23:11.834450 | orchestrator | + local attempt_num=1
2026-01-05 00:23:11.835790 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:23:11.874394 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-05 00:23:11.874565 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-01-05 00:23:11.874591 | orchestrator | + local max_attempts=60
2026-01-05 00:23:11.874614 | orchestrator | + local name=kolla-ansible
2026-01-05 00:23:11.874642 | orchestrator | + local attempt_num=1
2026-01-05 00:23:11.874921 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-01-05 00:23:11.911399 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-05 00:23:11.911529 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-01-05 00:23:11.911547 | orchestrator | + local max_attempts=60
2026-01-05 00:23:11.911559 | orchestrator | + local name=osism-ansible
2026-01-05 00:23:11.911570 | orchestrator | + local attempt_num=1
2026-01-05 00:23:11.912881 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
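The trace above comes from a `wait_for_container_healthy` helper that polls each container's Docker health status before the deployment continues. The helper's full body is not visible in the log; the sketch below is a hypothetical reconstruction, with the probe command made a parameter (`always_healthy` is a stand-in for the real `docker inspect` call) so it runs without Docker:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical reconstruction of the wait_for_container_healthy helper traced
# above. The real script probes with:
#   /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name"
# Here the probe is a parameter so the sketch runs without Docker.
wait_for_container_healthy() {
  local max_attempts="$1" name="$2" probe="$3"
  local attempt_num=1
  until [ "$("${probe}" "${name}")" = "healthy" ]; do
    if [ "${attempt_num}" -ge "${max_attempts}" ]; then
      echo "${name}: not healthy after ${max_attempts} attempts" >&2
      return 1
    fi
    attempt_num=$((attempt_num + 1))
    sleep 1
  done
  echo "${name}: healthy"
}

# Stand-in probe; always reports a healthy container.
always_healthy() { echo healthy; }

wait_for_container_healthy 60 ceph-ansible always_healthy
```

In the log all three containers (ceph-ansible, kolla-ansible, osism-ansible) report `healthy` on the first probe, so the loop never sleeps.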
2026-01-05 00:23:11.956035 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-05 00:23:11.956149 | orchestrator | + [[ true == \t\r\u\e ]]
2026-01-05 00:23:11.956163 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-01-05 00:23:12.697848 | orchestrator | + docker compose --project-directory /opt/manager ps
2026-01-05 00:23:12.891148 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2026-01-05 00:23:12.891260 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy)
2026-01-05 00:23:12.891274 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy)
2026-01-05 00:23:12.891284 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp
2026-01-05 00:23:12.891296 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp
2026-01-05 00:23:12.891307 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy)
2026-01-05 00:23:12.891317 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy)
2026-01-05 00:23:12.891326 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy)
2026-01-05 00:23:12.891358 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy)
2026-01-05 00:23:12.891368 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp
2026-01-05 00:23:12.891378 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy)
2026-01-05 00:23:12.891387 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp
2026-01-05 00:23:12.891397 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy)
2026-01-05 00:23:12.891407 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp
2026-01-05 00:23:12.891416 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy)
2026-01-05 00:23:12.891426 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy)
2026-01-05 00:23:12.898723 | orchestrator | ++ semver latest 7.0.0
2026-01-05 00:23:12.955823 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-05 00:23:12.955909 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-01-05 00:23:12.955924 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2026-01-05 00:23:12.960708 | orchestrator | + osism apply resolvconf -l testbed-manager
2026-01-05 00:23:25.317166 | orchestrator | 2026-01-05 00:23:25 | INFO  | Task c0996ec8-a118-421a-9200-242093656474 (resolvconf) was prepared for execution.
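The `semver latest 7.0.0` / `[[ latest == \l\a\t\e\s\t ]]` sequence in the trace above gates the switch to the `osism.commons.still_alive` callback on the manager version: the gate passes for releases at or above 7.0.0 and for the rolling `latest` tag. The sketch below is a hypothetical rendering of that gate; `compare_version` is a stand-in for the external `semver` helper, which prints -1/0/1 for older/equal/newer:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical stand-in for the external `semver` helper used in the trace:
# prints -1, 0 or 1 for older / equal / newer. Non-numeric tags such as
# "latest" compare as older, matching the -1 seen in the log.
compare_version() {
  local a="$1" b="$2"
  if [ "$a" = "$b" ]; then echo 0; return; fi
  case "$a" in
    *[!0-9.]*) echo -1; return ;;  # e.g. "latest" is not a numeric version
  esac
  if [ "$(printf '%s\n%s\n' "$a" "$b" | sort -V | head -n1)" = "$a" ]; then
    echo -1
  else
    echo 1
  fi
}

manager_version=latest
# Enable the callback for releases >= 7.0.0 and for "latest", as traced above.
if [ "$(compare_version "${manager_version}" 7.0.0)" -ge 0 ] \
   || [ "${manager_version}" = latest ]; then
  echo "enable osism.commons.still_alive callback"
fi
```

With `manager_version=latest` the numeric comparison yields -1 and the `latest` special case lets the gate pass, which is why the subsequent `sed` rewrites `ansible.cfg` in the trace.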
2026-01-05 00:23:25.317326 | orchestrator | 2026-01-05 00:23:25 | INFO  | It takes a moment until task c0996ec8-a118-421a-9200-242093656474 (resolvconf) has been started and output is visible here.
2026-01-05 00:23:39.894303 | orchestrator | 
2026-01-05 00:23:39.894449 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2026-01-05 00:23:39.894470 | orchestrator | 
2026-01-05 00:23:39.894483 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-05 00:23:39.894495 | orchestrator | Monday 05 January 2026 00:23:29 +0000 (0:00:00.142) 0:00:00.142 ********
2026-01-05 00:23:39.894507 | orchestrator | ok: [testbed-manager]
2026-01-05 00:23:39.894579 | orchestrator | 
2026-01-05 00:23:39.894594 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-01-05 00:23:39.894606 | orchestrator | Monday 05 January 2026 00:23:33 +0000 (0:00:03.890) 0:00:04.033 ********
2026-01-05 00:23:39.894617 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:23:39.894629 | orchestrator | 
2026-01-05 00:23:39.894640 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-01-05 00:23:39.894654 | orchestrator | Monday 05 January 2026 00:23:33 +0000 (0:00:00.071) 0:00:04.105 ********
2026-01-05 00:23:39.894674 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2026-01-05 00:23:39.894687 | orchestrator | 
2026-01-05 00:23:39.894698 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-01-05 00:23:39.894709 | orchestrator | Monday 05 January 2026 00:23:33 +0000 (0:00:00.077) 0:00:04.182 ********
2026-01-05 00:23:39.894721 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2026-01-05 00:23:39.894732 | orchestrator | 
2026-01-05 00:23:39.894745 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-01-05 00:23:39.894776 | orchestrator | Monday 05 January 2026 00:23:33 +0000 (0:00:00.086) 0:00:04.269 ********
2026-01-05 00:23:39.894791 | orchestrator | ok: [testbed-manager]
2026-01-05 00:23:39.894805 | orchestrator | 
2026-01-05 00:23:39.894818 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-01-05 00:23:39.894830 | orchestrator | Monday 05 January 2026 00:23:34 +0000 (0:00:01.174) 0:00:05.444 ********
2026-01-05 00:23:39.894843 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:23:39.894856 | orchestrator | 
2026-01-05 00:23:39.894869 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-01-05 00:23:39.894881 | orchestrator | Monday 05 January 2026 00:23:34 +0000 (0:00:00.068) 0:00:05.512 ********
2026-01-05 00:23:39.894892 | orchestrator | ok: [testbed-manager]
2026-01-05 00:23:39.894903 | orchestrator | 
2026-01-05 00:23:39.894913 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-01-05 00:23:39.894924 | orchestrator | Monday 05 January 2026 00:23:35 +0000 (0:00:00.549) 0:00:06.061 ********
2026-01-05 00:23:39.894937 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:23:39.894956 | orchestrator | 
2026-01-05 00:23:39.894975 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-01-05 00:23:39.894989 | orchestrator | Monday 05 January 2026 00:23:35 +0000 (0:00:00.073) 0:00:06.135 ********
2026-01-05 00:23:39.895000 | orchestrator | changed: [testbed-manager]
2026-01-05 00:23:39.895010 | orchestrator | 
2026-01-05 00:23:39.895021 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-01-05 00:23:39.895032 | orchestrator | Monday 05 January 2026 00:23:36 +0000 (0:00:00.582) 0:00:06.717 ********
2026-01-05 00:23:39.895043 | orchestrator | changed: [testbed-manager]
2026-01-05 00:23:39.895053 | orchestrator | 
2026-01-05 00:23:39.895064 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-01-05 00:23:39.895075 | orchestrator | Monday 05 January 2026 00:23:37 +0000 (0:00:01.136) 0:00:07.854 ********
2026-01-05 00:23:39.895109 | orchestrator | ok: [testbed-manager]
2026-01-05 00:23:39.895121 | orchestrator | 
2026-01-05 00:23:39.895133 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-01-05 00:23:39.895143 | orchestrator | Monday 05 January 2026 00:23:38 +0000 (0:00:00.969) 0:00:08.823 ********
2026-01-05 00:23:39.895154 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2026-01-05 00:23:39.895165 | orchestrator | 
2026-01-05 00:23:39.895176 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-01-05 00:23:39.895187 | orchestrator | Monday 05 January 2026 00:23:38 +0000 (0:00:00.096) 0:00:08.920 ********
2026-01-05 00:23:39.895198 | orchestrator | changed: [testbed-manager]
2026-01-05 00:23:39.895209 | orchestrator | 
2026-01-05 00:23:39.895220 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:23:39.895232 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-05 00:23:39.895243 | orchestrator | 
2026-01-05 00:23:39.895254 | orchestrator | 
2026-01-05 00:23:39.895265 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:23:39.895275 | orchestrator | Monday 05 January 2026 00:23:39 +0000 (0:00:01.227) 0:00:10.147 ********
2026-01-05 00:23:39.895286 | orchestrator | ===============================================================================
2026-01-05 00:23:39.895297 | orchestrator | Gathering Facts --------------------------------------------------------- 3.89s
2026-01-05 00:23:39.895308 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.23s
2026-01-05 00:23:39.895318 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.17s
2026-01-05 00:23:39.895329 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.14s
2026-01-05 00:23:39.895340 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.97s
2026-01-05 00:23:39.895353 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.58s
2026-01-05 00:23:39.895397 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.55s
2026-01-05 00:23:39.895416 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.10s
2026-01-05 00:23:39.895436 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s
2026-01-05 00:23:39.895455 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s
2026-01-05 00:23:39.895469 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s
2026-01-05 00:23:39.895479 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s
2026-01-05 00:23:39.895490 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s
2026-01-05 00:23:40.230485 | 
orchestrator | + osism apply sshconfig
2026-01-05 00:23:52.413659 | orchestrator | 2026-01-05 00:23:52 | INFO  | Task 38a47498-cd8c-4e08-9d11-7bd22c06850f (sshconfig) was prepared for execution.
2026-01-05 00:23:52.413807 | orchestrator | 2026-01-05 00:23:52 | INFO  | It takes a moment until task 38a47498-cd8c-4e08-9d11-7bd22c06850f (sshconfig) has been started and output is visible here.
2026-01-05 00:24:04.665924 | orchestrator | 
2026-01-05 00:24:04.666115 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2026-01-05 00:24:04.666133 | orchestrator | 
2026-01-05 00:24:04.666145 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2026-01-05 00:24:04.666157 | orchestrator | Monday 05 January 2026 00:23:56 +0000 (0:00:00.162) 0:00:00.162 ********
2026-01-05 00:24:04.666169 | orchestrator | ok: [testbed-manager]
2026-01-05 00:24:04.666181 | orchestrator | 
2026-01-05 00:24:04.666193 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2026-01-05 00:24:04.666204 | orchestrator | Monday 05 January 2026 00:23:57 +0000 (0:00:00.555) 0:00:00.718 ********
2026-01-05 00:24:04.666246 | orchestrator | changed: [testbed-manager]
2026-01-05 00:24:04.666259 | orchestrator | 
2026-01-05 00:24:04.666270 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2026-01-05 00:24:04.666281 | orchestrator | Monday 05 January 2026 00:23:57 +0000 (0:00:00.544) 0:00:01.263 ********
2026-01-05 00:24:04.666292 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2026-01-05 00:24:04.666304 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2026-01-05 00:24:04.666315 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2026-01-05 00:24:04.666326 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2026-01-05 00:24:04.666337 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2026-01-05 00:24:04.666348 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2026-01-05 00:24:04.666359 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2026-01-05 00:24:04.666370 | orchestrator | 
2026-01-05 00:24:04.666381 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2026-01-05 00:24:04.666392 | orchestrator | Monday 05 January 2026 00:24:03 +0000 (0:00:05.886) 0:00:07.149 ********
2026-01-05 00:24:04.666403 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:24:04.666414 | orchestrator | 
2026-01-05 00:24:04.666425 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2026-01-05 00:24:04.666437 | orchestrator | Monday 05 January 2026 00:24:03 +0000 (0:00:00.070) 0:00:07.219 ********
2026-01-05 00:24:04.666448 | orchestrator | changed: [testbed-manager]
2026-01-05 00:24:04.666461 | orchestrator | 
2026-01-05 00:24:04.666474 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:24:04.666489 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:24:04.666503 | orchestrator | 
2026-01-05 00:24:04.666516 | orchestrator | 
2026-01-05 00:24:04.666529 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:24:04.666543 | orchestrator | Monday 05 January 2026 00:24:04 +0000 (0:00:00.603) 0:00:07.823 ********
2026-01-05 00:24:04.666620 | orchestrator | ===============================================================================
2026-01-05 00:24:04.666634 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.89s
2026-01-05 00:24:04.666646 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.60s
2026-01-05 00:24:04.666656 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.56s
2026-01-05 00:24:04.666668 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.54s
2026-01-05 00:24:04.666679 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s
2026-01-05 00:24:05.018292 | orchestrator | + osism apply known-hosts
2026-01-05 00:24:17.205688 | orchestrator | 2026-01-05 00:24:17 | INFO  | Task 65733bbc-bb5a-415f-94eb-8711d1bda6cd (known-hosts) was prepared for execution.
2026-01-05 00:24:17.205819 | orchestrator | 2026-01-05 00:24:17 | INFO  | It takes a moment until task 65733bbc-bb5a-415f-94eb-8711d1bda6cd (known-hosts) has been started and output is visible here.
2026-01-05 00:24:34.559977 | orchestrator | 
2026-01-05 00:24:34.560114 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2026-01-05 00:24:34.560132 | orchestrator | 
2026-01-05 00:24:34.560145 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2026-01-05 00:24:34.560158 | orchestrator | Monday 05 January 2026 00:24:21 +0000 (0:00:00.171) 0:00:00.171 ********
2026-01-05 00:24:34.560171 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-01-05 00:24:34.560184 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-01-05 00:24:34.560205 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-01-05 00:24:34.560221 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-01-05 00:24:34.560278 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-01-05 00:24:34.560302 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-01-05 00:24:34.560321 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-01-05 00:24:34.560339 | orchestrator | 
2026-01-05 00:24:34.560357 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2026-01-05 00:24:34.560378 | orchestrator | Monday 05 January 2026 00:24:27 +0000 (0:00:06.126) 0:00:06.298 ********
2026-01-05 00:24:34.560400 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-01-05 00:24:34.560422 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-01-05 00:24:34.560456 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-01-05 00:24:34.560478 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-01-05 00:24:34.560498 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-01-05 00:24:34.560519 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-01-05 00:24:34.560537 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-01-05 00:24:34.560554 | orchestrator | 
2026-01-05 00:24:34.560568 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-01-05 00:24:34.560615 | orchestrator | Monday 05 January 2026 00:24:27 +0000 (0:00:00.176)
0:00:06.474 ******** 2026-01-05 00:24:34.560630 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMWYqPZ66xqs36B78ovI3nm7FWGQoXMod8gwRJLUhTJdxCV6SGAPa4VJ75RYAPIF61KHEVvfNF+rWYnNn3/+i24=) 2026-01-05 00:24:34.560649 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6UNAx/5rzaKbp7bjGOk+DcUZ/OIVgYELg+YnCBwwTKAEZApXKmRlKEXDiUX3I7DTLhpXbEOxMLgCgVE6GHR0HFuuYwcZ0j4JJCjm2nxsgsdF7urmdF8HIbs/O4+LkiaOiLsp8sS9Lm3ndUW9o4YTNjYqJedMVgnQcJYeIBsGxmmRFDXfc9ukFR6hM3FOiW6HDzsKHhh5AcoOynnhps/0mEgOKHsLKcmCaR4R5i7tHo+kEjGGdaH4ELIWUSFdIluDbO7OmKAPL/xrKJAI8a1bhf6b15I3A1Qw2UFsTr5FdRmxzSiBWsG/HOYT5T4ZsftrHnCjCmoutp0HoHk/Y5hMad6vySRYCrR8nCfoLL34WGA3sgvNmQKNXlgayhig0yue9oyid5lgLr5+WAH0J+9igAEKq/88KdwPldboA82AYnNfVDlB4qApmxrblC5Qys73ztci4l0Ycp9K9dwE2ihfdGGy4g5AVWU6N0IgawKT3CVCAoI0qCAwpHaRF4ovn3zE=) 2026-01-05 00:24:34.560673 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy013DiESaPktfQGrCa1ai7LeQdNg5QldIRLAkXP8hf) 2026-01-05 00:24:34.560687 | orchestrator | 2026-01-05 00:24:34.560701 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:24:34.560714 | orchestrator | Monday 05 January 2026 00:24:29 +0000 (0:00:01.235) 0:00:07.709 ******** 2026-01-05 00:24:34.560749 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDVFR0dp+Tiw6sgGqoWU6Lmh+Lu0cLkeSYKYUO7xBsjYVYwnHsC6qBnQdLM2EgDhd5nz0oG/1wdR/oK+9R3ZGFYjj7T58ubA5+bwVXqe3dHEob93scLshnmb5LOnebQ5A3N4Ev6Ym5yop4haP67wf80Nzmai00gPH9rM8MakpZ40ExlOilgtcNAAMin2ZkikxKhf58Tmw0ZnIMrL6miaSaX9Ld2ZRGdVisegX5BqqHT6CI/Ja69DlltHDaTbBV0rPPKPGlx+FL0RCyM6rlZ4pob7MNyVTbvyWU0LSg+pl22f381pr8mQYumDDfiqbSPkvZ52igJTYnR0AsalhDVpKx+hjY0ssxZywE7i/28Fe/lOKgCfwNUr65PEwLr7tRi263T2Z3EhwL5JNkzzt5JEE8boaEqnHhQgJgdAPglL4wikd5/ubTxmy+eNYQaWOAJDoALlKuljNv4PYPN8qsnWtNWd67f/bimU458uOpnqyNw5VMTQgegXvvvMdMtiw525uE=) 2026-01-05 00:24:34.560778 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGidapKLPCrXvK/GOsm4fYxpdaqcERb9EK2NS77lx72jMIG24leXFhBvURJOdSUsM4C0Car8tJhJnlljH000HcA=) 2026-01-05 00:24:34.560792 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEhJe41Pqol4JE39DZOPqRkXbA6FsFt2C4ZbSuQBb6aG) 2026-01-05 00:24:34.560805 | orchestrator | 2026-01-05 00:24:34.560818 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:24:34.560830 | orchestrator | Monday 05 January 2026 00:24:30 +0000 (0:00:01.103) 0:00:08.812 ******** 2026-01-05 00:24:34.560842 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8bm9Mw9mBuIjNFjmCEsgQ1T7ctsgtYxOEbDAFrMpxkAC/QRHHegdEclG0GTXSQMqU+uMTuBJC6hnWzI+eF8p8AHFOby2HapK7lanav/u4UMgn1o+3yFcqgMVmdDTICJ7K/11GXUxlk+J4VTobYibV8zuy6EM0KSQpaXEeeCiOyLkbQ8KYW7Xe0gB2msKTUAKUYRIHUff74MjQqvRB9tBJCU5gIFPxe4q6/zRLupE0OEhv8ed11jpBhxQHh3+LYGq9x55A72Qe23Kxe/wuKeUTEiih4DNZlgwp2QwRB0NrG7p2Ej04j8O2F+PO2ytYOPrPsWHQGJOITPn1Qy7m21Y2RmrEtq0nBo5iOdqQVrGbRbQ+sHvniDWi32AXA6SOUopRG7AxdbA14MDjBNvdSX4vvq0SmMUJglASQ5c3exUkHhklbhiktOerjKGF9xXYjvc7NDLEmE92sQqTjfx7jtnXy3XmcseOSMlHia2cOiwNIy0nHpDkLODXwY+sZDcHznM=) 2026-01-05 00:24:34.560854 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOe92Fm37cP66ZcdJcrB1ZPdf0NTKyZoNwP3DO7DanPRm12wxrAAqgvT+0ZIrghodjnnDZnW2HAan/iwgzWt9Bg=) 2026-01-05 00:24:34.560865 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEV/sFdIpt4FG/JADiibbQth4ySAscV2ljWKkl8zh7HC) 2026-01-05 00:24:34.560876 | orchestrator | 2026-01-05 00:24:34.560887 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:24:34.560898 | orchestrator | Monday 05 January 2026 00:24:31 +0000 (0:00:01.117) 0:00:09.930 ******** 2026-01-05 00:24:34.560989 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ5LYaY0qwEQo8sGwTHRFJB9xC85J7d1w8KdoxWnGOGKAQWZC/B20MsTZpUe5XuwbJpIKkkIictNhMbuI4lgWvs=) 2026-01-05 00:24:34.561001 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPtw0tBObKsXI/BMoqCTTVXgwkwTW2FpGVUrunpDVQ2/) 2026-01-05 00:24:34.561013 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDsnKzkpSE9L5FuS96OxT/RppMXfVq6MRMPOiq/V41KtQFSLNHU+r3TMZzmA53TAFV/5WixHMZKmmBoxu4lxtwQJaQsClg1CVYLDXpjRDH/G3PKKKJIl96xI8yJAAoKxyc6MUUjjTOmnAayjfgU55CT1UcOXFeNsYMQJqISmlSDG9b3o8i8lforFTXKTWnchf1Qinvo81C6HO42k0giSKVLLsc8xgJvtvlPm5VA+7cCyamC+nQ/l+B9Bf45eO6Gj2ofzDj10s5LTRv319+uuvSwDPXsqWZJCWgcDtj5vjdlJBciKMuL6z26eDTYzkGalWwNo3nJ0EPt7VJZjDhCGww4q6nOc0EQQ2XWdcqSTqoZcdSPSceCPhOKJvKCNagHDydPhNBVndgXm8JEBxHTH6ErW63nxfHOhLa5EUUso+ykKFMvH8ew/8dRoBTpmDbh48TVBMlO18Bh/foH+fCxA3MJHejdP09A2GlqajRG+GNbfMbJhrDGKEgO/kOu69xZIhk=) 2026-01-05 00:24:34.561025 | orchestrator | 2026-01-05 00:24:34.561036 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:24:34.561047 | orchestrator | Monday 05 January 2026 00:24:32 +0000 (0:00:01.083) 
0:00:11.013 ******** 2026-01-05 00:24:34.561058 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCc0vrpZdTm9/mzVl+YtRvu8u2ER6WEoQ1okZqUyS7orKApGozEdN4qaBjovUReMUbU685B2orCfHC0rBGJBU2NlaqjBaQuVNHpNepE/b+84huzQGYD+zqI5urHb8JyZwJsya/Wu9n/QMCNRhVrew7xT2cSW8dytZWdrotng8+pJrYjRU7BLv2WBUxTV9GI9GCmQhlGgV5MRpK+KCxYdqtaiewn2uOjpmOCYIgrFOafKIHYkvh+CHecoPelXrLPNmpS9whuDYe1D9T7IXuBiRS8Xm6Z9PozN0AlowcpEHbmUjm+zd1wfMJp7qZRi9l52ESHH881qWrVIfP1lOzSfxBGiyrxlGiHO1/sSR8YFVs/cWiOHHkjrpjrh7/9O865WipFKWWeNNPA89wtiO+pq6YALDpZmgAc3WOVs/x16jbQ26gH1tsPyjZXoEDnYzJ5GvcW+QIpqaNi03e8cEBFrFCkJEnLpn6l3Zp36WUaDl/vIs4UCOhmbM4SxXysixef978=) 2026-01-05 00:24:34.561077 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDZ0hZ26ITNObcuYZePj99grjP6DPaym07ES+uCB6RIn) 2026-01-05 00:24:34.561088 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBaFYflCPegxkJB9lHVfzAUd4wQJ32zWSIZfi6mbW8C4sAG18zyW5Wj1U3Oo0Z9w1m2joaQsw1Hlyn4WGCdOCf8=) 2026-01-05 00:24:34.561100 | orchestrator | 2026-01-05 00:24:34.561111 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:24:34.561121 | orchestrator | Monday 05 January 2026 00:24:33 +0000 (0:00:01.090) 0:00:12.104 ******** 2026-01-05 00:24:34.561142 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDC45s/9IsXyinZgsh2BJ4kpH0aV3PwhGsMqTcvmf6+xb9oIRbjARHTu8s8LT+SYOigs0NBcI8So0dO6WIta507B5N/j/2jVUPEMTBTpvYs0TdW+GjiD5M1rfEPBLbQKSw4bwWPZS73we8k0zDBmxPwmy4hf3g73/GO56T37HBynmvj1j0l8mB1vBCZcJG1z0S29adUHK5dj26Mz/6yyd9cKyzca2nIOURzcnUX1Eq4Da0mS8a/rxJtgy00myGurclnby9FAEdioGFXERlo6kT5z2G/v1+D5Ekum5Tl4aK/vMQ80TxrJzIYVBi7zYUJylHIWc5fEcNyKKgeecHusmcB8wkty2gRB1hE8JOVFaEGZNj7+zjauGBzs9GIolbMO6So/QkNKF4VNBPf+KFQEa2FN4LwltWkhCjNxeCkB4TO9xWpGOItqB60NtlXfNhX8HpiVPU5Mvhp+ICj+rCnrRhsjq7VfwXV8R4u/XF5Gf0a30I+pmdiC2CZ3NIhAocuDIc=) 2026-01-05 00:24:45.612816 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB7nlSKDS+mdTyyi1FMKfnS1p1Z+TEw0fiI8dDoLqrXwzi+IZCmauY/iWZqTJTySlhnBS7/1jYMKnMMVvho1dS4=) 2026-01-05 00:24:45.612935 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINtnNj+Kr1IKpV3cDO5JBdWPa3j1DlHDQADAu3oJ28Qx) 2026-01-05 00:24:45.612956 | orchestrator | 2026-01-05 00:24:45.612977 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:24:45.612990 | orchestrator | Monday 05 January 2026 00:24:34 +0000 (0:00:01.126) 0:00:13.230 ******** 2026-01-05 00:24:45.613004 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKDnzDYojxOp+2n7MulzBbG4m/5XZR1ni3vtfLI/F0WzeEkimurpnPqYHPXq4KL/wuZDgGQDU9dKxmSn52BIsUti32j8TYibuC/6+V5rFISZEydShgduGsUGRJnNa7phg1Q/GknyxG55j2K9JuJzDDCxpY3Es8x649wVnTxx3yan1IqJrwabKxWTCT8fiGwx5N3bbFnU3vczv4c0zdO6t4LQTflkqQOPKQDJo4ohPgfEfc+aEvlVQPVvBH3Y+o4/i3MagrSHlZ/mXMMAusTnyNv2cX72w7SuLHteGy+xlqUFtK5VZl8lNGu3t88X0hXr2AU7q+mvgRrpsGx9xsgLSx1JuSx/gyftFjm1wL5MX3LdVUYqfOkmCWr8m6dAvjwVJr+oNvE9UDdNq00u39f4qOptnHibKu+gVII34DDqYA1s8DEqaalp9STgT3BtKy1dR+UNTZODhqcC39m4x9b65Do4XFtCROKCR06g0ZlKg0Lx7TiUVYXEh73FU1k0Hxfh8=) 2026-01-05 00:24:45.613020 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAW4ZGla2JJgITUYyke8G4hhQwB+bqzJ7XTKRaCLe5mLYMQ1ZAFb+2tQ3yaEuI9w9L3mzLvBayhkpHmcf/8HKfA=) 2026-01-05 00:24:45.613039 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICvWlIiWTC0Fn4VxnP7egIkKvoY+JPxC3kPw4hpvZkxC) 2026-01-05 00:24:45.613057 | orchestrator | 2026-01-05 00:24:45.613069 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-01-05 00:24:45.613081 | orchestrator | Monday 05 January 2026 00:24:35 +0000 (0:00:01.090) 0:00:14.320 ******** 2026-01-05 00:24:45.613093 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-05 00:24:45.613105 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-01-05 00:24:45.613115 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-05 00:24:45.613126 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-05 00:24:45.613137 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-05 00:24:45.613148 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-05 00:24:45.613158 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-05 00:24:45.613169 | orchestrator | 2026-01-05 00:24:45.613180 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-01-05 00:24:45.613222 | orchestrator | Monday 05 January 2026 00:24:41 +0000 (0:00:05.405) 0:00:19.725 ******** 2026-01-05 00:24:45.613271 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-05 00:24:45.613285 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for 
testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-05 00:24:45.613297 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-05 00:24:45.613308 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-05 00:24:45.613322 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-05 00:24:45.613336 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-05 00:24:45.613348 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-05 00:24:45.613361 | orchestrator | 2026-01-05 00:24:45.613373 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:24:45.613396 | orchestrator | Monday 05 January 2026 00:24:41 +0000 (0:00:00.194) 0:00:19.920 ******** 2026-01-05 00:24:45.613410 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy013DiESaPktfQGrCa1ai7LeQdNg5QldIRLAkXP8hf) 2026-01-05 00:24:45.613451 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC6UNAx/5rzaKbp7bjGOk+DcUZ/OIVgYELg+YnCBwwTKAEZApXKmRlKEXDiUX3I7DTLhpXbEOxMLgCgVE6GHR0HFuuYwcZ0j4JJCjm2nxsgsdF7urmdF8HIbs/O4+LkiaOiLsp8sS9Lm3ndUW9o4YTNjYqJedMVgnQcJYeIBsGxmmRFDXfc9ukFR6hM3FOiW6HDzsKHhh5AcoOynnhps/0mEgOKHsLKcmCaR4R5i7tHo+kEjGGdaH4ELIWUSFdIluDbO7OmKAPL/xrKJAI8a1bhf6b15I3A1Qw2UFsTr5FdRmxzSiBWsG/HOYT5T4ZsftrHnCjCmoutp0HoHk/Y5hMad6vySRYCrR8nCfoLL34WGA3sgvNmQKNXlgayhig0yue9oyid5lgLr5+WAH0J+9igAEKq/88KdwPldboA82AYnNfVDlB4qApmxrblC5Qys73ztci4l0Ycp9K9dwE2ihfdGGy4g5AVWU6N0IgawKT3CVCAoI0qCAwpHaRF4ovn3zE=) 2026-01-05 00:24:45.613466 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMWYqPZ66xqs36B78ovI3nm7FWGQoXMod8gwRJLUhTJdxCV6SGAPa4VJ75RYAPIF61KHEVvfNF+rWYnNn3/+i24=) 2026-01-05 00:24:45.613479 | orchestrator | 2026-01-05 00:24:45.613491 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:24:45.613506 | orchestrator | Monday 05 January 2026 00:24:42 +0000 (0:00:01.088) 0:00:21.009 ******** 2026-01-05 00:24:45.613519 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGidapKLPCrXvK/GOsm4fYxpdaqcERb9EK2NS77lx72jMIG24leXFhBvURJOdSUsM4C0Car8tJhJnlljH000HcA=) 2026-01-05 00:24:45.613532 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDVFR0dp+Tiw6sgGqoWU6Lmh+Lu0cLkeSYKYUO7xBsjYVYwnHsC6qBnQdLM2EgDhd5nz0oG/1wdR/oK+9R3ZGFYjj7T58ubA5+bwVXqe3dHEob93scLshnmb5LOnebQ5A3N4Ev6Ym5yop4haP67wf80Nzmai00gPH9rM8MakpZ40ExlOilgtcNAAMin2ZkikxKhf58Tmw0ZnIMrL6miaSaX9Ld2ZRGdVisegX5BqqHT6CI/Ja69DlltHDaTbBV0rPPKPGlx+FL0RCyM6rlZ4pob7MNyVTbvyWU0LSg+pl22f381pr8mQYumDDfiqbSPkvZ52igJTYnR0AsalhDVpKx+hjY0ssxZywE7i/28Fe/lOKgCfwNUr65PEwLr7tRi263T2Z3EhwL5JNkzzt5JEE8boaEqnHhQgJgdAPglL4wikd5/ubTxmy+eNYQaWOAJDoALlKuljNv4PYPN8qsnWtNWd67f/bimU458uOpnqyNw5VMTQgegXvvvMdMtiw525uE=) 
2026-01-05 00:24:45.613553 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEhJe41Pqol4JE39DZOPqRkXbA6FsFt2C4ZbSuQBb6aG) 2026-01-05 00:24:45.613567 | orchestrator | 2026-01-05 00:24:45.613580 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:24:45.613637 | orchestrator | Monday 05 January 2026 00:24:43 +0000 (0:00:01.067) 0:00:22.077 ******** 2026-01-05 00:24:45.613652 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8bm9Mw9mBuIjNFjmCEsgQ1T7ctsgtYxOEbDAFrMpxkAC/QRHHegdEclG0GTXSQMqU+uMTuBJC6hnWzI+eF8p8AHFOby2HapK7lanav/u4UMgn1o+3yFcqgMVmdDTICJ7K/11GXUxlk+J4VTobYibV8zuy6EM0KSQpaXEeeCiOyLkbQ8KYW7Xe0gB2msKTUAKUYRIHUff74MjQqvRB9tBJCU5gIFPxe4q6/zRLupE0OEhv8ed11jpBhxQHh3+LYGq9x55A72Qe23Kxe/wuKeUTEiih4DNZlgwp2QwRB0NrG7p2Ej04j8O2F+PO2ytYOPrPsWHQGJOITPn1Qy7m21Y2RmrEtq0nBo5iOdqQVrGbRbQ+sHvniDWi32AXA6SOUopRG7AxdbA14MDjBNvdSX4vvq0SmMUJglASQ5c3exUkHhklbhiktOerjKGF9xXYjvc7NDLEmE92sQqTjfx7jtnXy3XmcseOSMlHia2cOiwNIy0nHpDkLODXwY+sZDcHznM=) 2026-01-05 00:24:45.613666 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOe92Fm37cP66ZcdJcrB1ZPdf0NTKyZoNwP3DO7DanPRm12wxrAAqgvT+0ZIrghodjnnDZnW2HAan/iwgzWt9Bg=) 2026-01-05 00:24:45.613680 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEV/sFdIpt4FG/JADiibbQth4ySAscV2ljWKkl8zh7HC) 2026-01-05 00:24:45.613692 | orchestrator | 2026-01-05 00:24:45.613703 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:24:45.613714 | orchestrator | Monday 05 January 2026 00:24:44 +0000 (0:00:01.061) 0:00:23.138 ******** 2026-01-05 00:24:45.613725 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDsnKzkpSE9L5FuS96OxT/RppMXfVq6MRMPOiq/V41KtQFSLNHU+r3TMZzmA53TAFV/5WixHMZKmmBoxu4lxtwQJaQsClg1CVYLDXpjRDH/G3PKKKJIl96xI8yJAAoKxyc6MUUjjTOmnAayjfgU55CT1UcOXFeNsYMQJqISmlSDG9b3o8i8lforFTXKTWnchf1Qinvo81C6HO42k0giSKVLLsc8xgJvtvlPm5VA+7cCyamC+nQ/l+B9Bf45eO6Gj2ofzDj10s5LTRv319+uuvSwDPXsqWZJCWgcDtj5vjdlJBciKMuL6z26eDTYzkGalWwNo3nJ0EPt7VJZjDhCGww4q6nOc0EQQ2XWdcqSTqoZcdSPSceCPhOKJvKCNagHDydPhNBVndgXm8JEBxHTH6ErW63nxfHOhLa5EUUso+ykKFMvH8ew/8dRoBTpmDbh48TVBMlO18Bh/foH+fCxA3MJHejdP09A2GlqajRG+GNbfMbJhrDGKEgO/kOu69xZIhk=) 2026-01-05 00:24:45.613736 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ5LYaY0qwEQo8sGwTHRFJB9xC85J7d1w8KdoxWnGOGKAQWZC/B20MsTZpUe5XuwbJpIKkkIictNhMbuI4lgWvs=) 2026-01-05 00:24:45.613759 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPtw0tBObKsXI/BMoqCTTVXgwkwTW2FpGVUrunpDVQ2/) 2026-01-05 00:24:50.282817 | orchestrator | 2026-01-05 00:24:50.282923 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:24:50.282931 | orchestrator | Monday 05 January 2026 00:24:45 +0000 (0:00:01.141) 0:00:24.280 ******** 2026-01-05 00:24:50.282936 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDZ0hZ26ITNObcuYZePj99grjP6DPaym07ES+uCB6RIn) 2026-01-05 00:24:50.282946 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCc0vrpZdTm9/mzVl+YtRvu8u2ER6WEoQ1okZqUyS7orKApGozEdN4qaBjovUReMUbU685B2orCfHC0rBGJBU2NlaqjBaQuVNHpNepE/b+84huzQGYD+zqI5urHb8JyZwJsya/Wu9n/QMCNRhVrew7xT2cSW8dytZWdrotng8+pJrYjRU7BLv2WBUxTV9GI9GCmQhlGgV5MRpK+KCxYdqtaiewn2uOjpmOCYIgrFOafKIHYkvh+CHecoPelXrLPNmpS9whuDYe1D9T7IXuBiRS8Xm6Z9PozN0AlowcpEHbmUjm+zd1wfMJp7qZRi9l52ESHH881qWrVIfP1lOzSfxBGiyrxlGiHO1/sSR8YFVs/cWiOHHkjrpjrh7/9O865WipFKWWeNNPA89wtiO+pq6YALDpZmgAc3WOVs/x16jbQ26gH1tsPyjZXoEDnYzJ5GvcW+QIpqaNi03e8cEBFrFCkJEnLpn6l3Zp36WUaDl/vIs4UCOhmbM4SxXysixef978=) 2026-01-05 00:24:50.282954 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBaFYflCPegxkJB9lHVfzAUd4wQJ32zWSIZfi6mbW8C4sAG18zyW5Wj1U3Oo0Z9w1m2joaQsw1Hlyn4WGCdOCf8=) 2026-01-05 00:24:50.282976 | orchestrator | 2026-01-05 00:24:50.282981 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:24:50.282985 | orchestrator | Monday 05 January 2026 00:24:46 +0000 (0:00:01.157) 0:00:25.438 ******** 2026-01-05 00:24:50.282988 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB7nlSKDS+mdTyyi1FMKfnS1p1Z+TEw0fiI8dDoLqrXwzi+IZCmauY/iWZqTJTySlhnBS7/1jYMKnMMVvho1dS4=) 2026-01-05 00:24:50.282993 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDC45s/9IsXyinZgsh2BJ4kpH0aV3PwhGsMqTcvmf6+xb9oIRbjARHTu8s8LT+SYOigs0NBcI8So0dO6WIta507B5N/j/2jVUPEMTBTpvYs0TdW+GjiD5M1rfEPBLbQKSw4bwWPZS73we8k0zDBmxPwmy4hf3g73/GO56T37HBynmvj1j0l8mB1vBCZcJG1z0S29adUHK5dj26Mz/6yyd9cKyzca2nIOURzcnUX1Eq4Da0mS8a/rxJtgy00myGurclnby9FAEdioGFXERlo6kT5z2G/v1+D5Ekum5Tl4aK/vMQ80TxrJzIYVBi7zYUJylHIWc5fEcNyKKgeecHusmcB8wkty2gRB1hE8JOVFaEGZNj7+zjauGBzs9GIolbMO6So/QkNKF4VNBPf+KFQEa2FN4LwltWkhCjNxeCkB4TO9xWpGOItqB60NtlXfNhX8HpiVPU5Mvhp+ICj+rCnrRhsjq7VfwXV8R4u/XF5Gf0a30I+pmdiC2CZ3NIhAocuDIc=) 
2026-01-05 00:24:50.282997 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINtnNj+Kr1IKpV3cDO5JBdWPa3j1DlHDQADAu3oJ28Qx) 2026-01-05 00:24:50.283001 | orchestrator | 2026-01-05 00:24:50.283005 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:24:50.283009 | orchestrator | Monday 05 January 2026 00:24:47 +0000 (0:00:01.106) 0:00:26.545 ******** 2026-01-05 00:24:50.283012 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKDnzDYojxOp+2n7MulzBbG4m/5XZR1ni3vtfLI/F0WzeEkimurpnPqYHPXq4KL/wuZDgGQDU9dKxmSn52BIsUti32j8TYibuC/6+V5rFISZEydShgduGsUGRJnNa7phg1Q/GknyxG55j2K9JuJzDDCxpY3Es8x649wVnTxx3yan1IqJrwabKxWTCT8fiGwx5N3bbFnU3vczv4c0zdO6t4LQTflkqQOPKQDJo4ohPgfEfc+aEvlVQPVvBH3Y+o4/i3MagrSHlZ/mXMMAusTnyNv2cX72w7SuLHteGy+xlqUFtK5VZl8lNGu3t88X0hXr2AU7q+mvgRrpsGx9xsgLSx1JuSx/gyftFjm1wL5MX3LdVUYqfOkmCWr8m6dAvjwVJr+oNvE9UDdNq00u39f4qOptnHibKu+gVII34DDqYA1s8DEqaalp9STgT3BtKy1dR+UNTZODhqcC39m4x9b65Do4XFtCROKCR06g0ZlKg0Lx7TiUVYXEh73FU1k0Hxfh8=) 2026-01-05 00:24:50.283017 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAW4ZGla2JJgITUYyke8G4hhQwB+bqzJ7XTKRaCLe5mLYMQ1ZAFb+2tQ3yaEuI9w9L3mzLvBayhkpHmcf/8HKfA=) 2026-01-05 00:24:50.283021 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICvWlIiWTC0Fn4VxnP7egIkKvoY+JPxC3kPw4hpvZkxC) 2026-01-05 00:24:50.283024 | orchestrator | 2026-01-05 00:24:50.283028 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-01-05 00:24:50.283032 | orchestrator | Monday 05 January 2026 00:24:49 +0000 (0:00:01.144) 0:00:27.689 ******** 2026-01-05 00:24:50.283036 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-01-05 00:24:50.283041 | orchestrator | skipping: [testbed-manager] 
=> (item=testbed-node-3)  2026-01-05 00:24:50.283045 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-01-05 00:24:50.283049 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-01-05 00:24:50.283053 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-05 00:24:50.283056 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-01-05 00:24:50.283060 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-01-05 00:24:50.283064 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:24:50.283068 | orchestrator | 2026-01-05 00:24:50.283084 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-01-05 00:24:50.283088 | orchestrator | Monday 05 January 2026 00:24:49 +0000 (0:00:00.179) 0:00:27.868 ******** 2026-01-05 00:24:50.283092 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:24:50.283095 | orchestrator | 2026-01-05 00:24:50.283103 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-01-05 00:24:50.283107 | orchestrator | Monday 05 January 2026 00:24:49 +0000 (0:00:00.070) 0:00:27.939 ******** 2026-01-05 00:24:50.283111 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:24:50.283115 | orchestrator | 2026-01-05 00:24:50.283119 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-01-05 00:24:50.283123 | orchestrator | Monday 05 January 2026 00:24:49 +0000 (0:00:00.058) 0:00:27.997 ******** 2026-01-05 00:24:50.283126 | orchestrator | changed: [testbed-manager] 2026-01-05 00:24:50.283130 | orchestrator | 2026-01-05 00:24:50.283134 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:24:50.283138 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-05 00:24:50.283144 | 
orchestrator | 2026-01-05 00:24:50.283148 | orchestrator | 2026-01-05 00:24:50.283152 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:24:50.283155 | orchestrator | Monday 05 January 2026 00:24:50 +0000 (0:00:00.732) 0:00:28.730 ******** 2026-01-05 00:24:50.283159 | orchestrator | =============================================================================== 2026-01-05 00:24:50.283163 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.13s 2026-01-05 00:24:50.283167 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.41s 2026-01-05 00:24:50.283171 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.24s 2026-01-05 00:24:50.283175 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-01-05 00:24:50.283179 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-01-05 00:24:50.283183 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-01-05 00:24:50.283187 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-01-05 00:24:50.283190 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-01-05 00:24:50.283194 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-01-05 00:24:50.283198 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-01-05 00:24:50.283202 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-01-05 00:24:50.283205 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-01-05 00:24:50.283209 | orchestrator | osism.commons.known_hosts : Write 
scanned known_hosts entries ----------- 1.09s 2026-01-05 00:24:50.283213 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-01-05 00:24:50.283217 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-01-05 00:24:50.283221 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-01-05 00:24:50.283224 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.73s 2026-01-05 00:24:50.283231 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.19s 2026-01-05 00:24:50.283235 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s 2026-01-05 00:24:50.283240 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2026-01-05 00:24:50.637851 | orchestrator | + osism apply squid 2026-01-05 00:25:02.750741 | orchestrator | 2026-01-05 00:25:02 | INFO  | Task d100a3db-96f0-4896-bd8a-d1d2eb7e80c6 (squid) was prepared for execution. 2026-01-05 00:25:02.750856 | orchestrator | 2026-01-05 00:25:02 | INFO  | It takes a moment until task d100a3db-96f0-4896-bd8a-d1d2eb7e80c6 (squid) has been started and output is visible here. 
2026-01-05 00:27:01.930495 | orchestrator | 2026-01-05 00:27:01.930596 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-01-05 00:27:01.930631 | orchestrator | 2026-01-05 00:27:01.930640 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-01-05 00:27:01.930646 | orchestrator | Monday 05 January 2026 00:25:07 +0000 (0:00:00.165) 0:00:00.165 ******** 2026-01-05 00:27:01.930652 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-01-05 00:27:01.930659 | orchestrator | 2026-01-05 00:27:01.930665 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-01-05 00:27:01.930671 | orchestrator | Monday 05 January 2026 00:25:07 +0000 (0:00:00.100) 0:00:00.265 ******** 2026-01-05 00:27:01.930676 | orchestrator | ok: [testbed-manager] 2026-01-05 00:27:01.930683 | orchestrator | 2026-01-05 00:27:01.930689 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-01-05 00:27:01.930730 | orchestrator | Monday 05 January 2026 00:25:08 +0000 (0:00:01.546) 0:00:01.811 ******** 2026-01-05 00:27:01.930737 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-01-05 00:27:01.930743 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-01-05 00:27:01.930749 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-01-05 00:27:01.930754 | orchestrator | 2026-01-05 00:27:01.930760 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-01-05 00:27:01.930766 | orchestrator | Monday 05 January 2026 00:25:09 +0000 (0:00:01.208) 0:00:03.019 ******** 2026-01-05 00:27:01.930771 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-01-05 00:27:01.930777 | 
orchestrator | 2026-01-05 00:27:01.930783 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-01-05 00:27:01.930788 | orchestrator | Monday 05 January 2026 00:25:10 +0000 (0:00:01.099) 0:00:04.118 ******** 2026-01-05 00:27:01.930794 | orchestrator | ok: [testbed-manager] 2026-01-05 00:27:01.930799 | orchestrator | 2026-01-05 00:27:01.930805 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-01-05 00:27:01.930825 | orchestrator | Monday 05 January 2026 00:25:11 +0000 (0:00:00.373) 0:00:04.491 ******** 2026-01-05 00:27:01.930831 | orchestrator | changed: [testbed-manager] 2026-01-05 00:27:01.930836 | orchestrator | 2026-01-05 00:27:01.930842 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-01-05 00:27:01.930847 | orchestrator | Monday 05 January 2026 00:25:12 +0000 (0:00:00.969) 0:00:05.461 ******** 2026-01-05 00:27:01.930853 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-01-05 00:27:01.930859 | orchestrator | ok: [testbed-manager] 2026-01-05 00:27:01.930864 | orchestrator | 2026-01-05 00:27:01.930870 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-01-05 00:27:01.930875 | orchestrator | Monday 05 January 2026 00:25:48 +0000 (0:00:36.554) 0:00:42.015 ******** 2026-01-05 00:27:01.930881 | orchestrator | changed: [testbed-manager] 2026-01-05 00:27:01.930886 | orchestrator | 2026-01-05 00:27:01.930892 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-01-05 00:27:01.930897 | orchestrator | Monday 05 January 2026 00:26:00 +0000 (0:00:11.998) 0:00:54.013 ******** 2026-01-05 00:27:01.930903 | orchestrator | Pausing for 60 seconds 2026-01-05 00:27:01.930908 | orchestrator | changed: [testbed-manager] 2026-01-05 00:27:01.930914 | orchestrator | 2026-01-05 00:27:01.930919 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-01-05 00:27:01.930925 | orchestrator | Monday 05 January 2026 00:27:00 +0000 (0:01:00.084) 0:01:54.098 ******** 2026-01-05 00:27:01.930930 | orchestrator | ok: [testbed-manager] 2026-01-05 00:27:01.930936 | orchestrator | 2026-01-05 00:27:01.930941 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-01-05 00:27:01.930946 | orchestrator | Monday 05 January 2026 00:27:01 +0000 (0:00:00.073) 0:01:54.171 ******** 2026-01-05 00:27:01.930952 | orchestrator | changed: [testbed-manager] 2026-01-05 00:27:01.930957 | orchestrator | 2026-01-05 00:27:01.930963 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:27:01.930975 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:27:01.930980 | orchestrator | 2026-01-05 00:27:01.930986 | orchestrator | 2026-01-05 00:27:01.930991 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-01-05 00:27:01.930996 | orchestrator | Monday 05 January 2026 00:27:01 +0000 (0:00:00.627) 0:01:54.799 ******** 2026-01-05 00:27:01.931002 | orchestrator | =============================================================================== 2026-01-05 00:27:01.931007 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2026-01-05 00:27:01.931013 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 36.55s 2026-01-05 00:27:01.931018 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.00s 2026-01-05 00:27:01.931023 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.55s 2026-01-05 00:27:01.931029 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.21s 2026-01-05 00:27:01.931034 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.10s 2026-01-05 00:27:01.931040 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.97s 2026-01-05 00:27:01.931045 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.63s 2026-01-05 00:27:01.931050 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s 2026-01-05 00:27:01.931056 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s 2026-01-05 00:27:01.931061 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-01-05 00:27:02.313645 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-05 00:27:02.313888 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-01-05 00:27:02.320157 | orchestrator | + set -e 2026-01-05 00:27:02.320214 | orchestrator | + NAMESPACE=kolla 2026-01-05 
00:27:02.320229 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-01-05 00:27:02.325185 | orchestrator | ++ semver latest 9.0.0 2026-01-05 00:27:02.393741 | orchestrator | + [[ -1 -lt 0 ]] 2026-01-05 00:27:02.393846 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-05 00:27:02.394471 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-01-05 00:27:14.586460 | orchestrator | 2026-01-05 00:27:14 | INFO  | Task 329ea9b5-e88a-4525-95b4-5bf9a8ca1c36 (operator) was prepared for execution. 2026-01-05 00:27:14.586602 | orchestrator | 2026-01-05 00:27:14 | INFO  | It takes a moment until task 329ea9b5-e88a-4525-95b4-5bf9a8ca1c36 (operator) has been started and output is visible here. 2026-01-05 00:27:31.294361 | orchestrator | 2026-01-05 00:27:31.294482 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-01-05 00:27:31.294499 | orchestrator | 2026-01-05 00:27:31.294511 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-05 00:27:31.294523 | orchestrator | Monday 05 January 2026 00:27:18 +0000 (0:00:00.142) 0:00:00.142 ******** 2026-01-05 00:27:31.294534 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:27:31.294546 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:27:31.294559 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:27:31.294570 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:27:31.294581 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:27:31.294592 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:27:31.294602 | orchestrator | 2026-01-05 00:27:31.294619 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-01-05 00:27:31.294631 | orchestrator | Monday 05 January 2026 00:27:22 +0000 (0:00:03.369) 0:00:03.512 ******** 2026-01-05 00:27:31.294641 | orchestrator | ok: [testbed-node-0] 
2026-01-05 00:27:31.294652 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:27:31.294663 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:27:31.294674 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:27:31.294684 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:27:31.294747 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:27:31.294760 | orchestrator | 2026-01-05 00:27:31.294771 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-01-05 00:27:31.294782 | orchestrator | 2026-01-05 00:27:31.294793 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-05 00:27:31.294804 | orchestrator | Monday 05 January 2026 00:27:23 +0000 (0:00:00.931) 0:00:04.443 ******** 2026-01-05 00:27:31.294814 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:27:31.294825 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:27:31.294836 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:27:31.294847 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:27:31.294857 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:27:31.294868 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:27:31.294878 | orchestrator | 2026-01-05 00:27:31.294889 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-05 00:27:31.294900 | orchestrator | Monday 05 January 2026 00:27:23 +0000 (0:00:00.194) 0:00:04.638 ******** 2026-01-05 00:27:31.294910 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:27:31.294921 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:27:31.294931 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:27:31.294942 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:27:31.294952 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:27:31.294963 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:27:31.294973 | orchestrator | 2026-01-05 00:27:31.294984 | orchestrator | TASK [osism.commons.operator : Create operator group] 
**************************
2026-01-05 00:27:31.294995 | orchestrator | Monday 05 January 2026 00:27:23 +0000 (0:00:00.183) 0:00:04.822 ********
2026-01-05 00:27:31.295005 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:27:31.295017 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:27:31.295028 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:27:31.295038 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:27:31.295049 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:27:31.295060 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:27:31.295070 | orchestrator |
2026-01-05 00:27:31.295081 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-01-05 00:27:31.295105 | orchestrator | Monday 05 January 2026 00:27:24 +0000 (0:00:00.796) 0:00:05.618 ********
2026-01-05 00:27:31.295116 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:27:31.295127 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:27:31.295138 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:27:31.295148 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:27:31.295159 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:27:31.295169 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:27:31.295180 | orchestrator |
2026-01-05 00:27:31.295191 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-01-05 00:27:31.295201 | orchestrator | Monday 05 January 2026 00:27:25 +0000 (0:00:00.832) 0:00:06.450 ********
2026-01-05 00:27:31.295212 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-01-05 00:27:31.295223 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-01-05 00:27:31.295234 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-01-05 00:27:31.295245 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-01-05 00:27:31.295255 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-01-05 00:27:31.295266 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-01-05 00:27:31.295277 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-01-05 00:27:31.295288 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-01-05 00:27:31.295298 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-01-05 00:27:31.295309 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-01-05 00:27:31.295320 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-01-05 00:27:31.295331 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-01-05 00:27:31.295341 | orchestrator |
2026-01-05 00:27:31.295352 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-01-05 00:27:31.295369 | orchestrator | Monday 05 January 2026 00:27:26 +0000 (0:00:01.314) 0:00:07.764 ********
2026-01-05 00:27:31.295381 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:27:31.295391 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:27:31.295402 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:27:31.295417 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:27:31.295435 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:27:31.295454 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:27:31.295470 | orchestrator |
2026-01-05 00:27:31.295487 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-01-05 00:27:31.295505 | orchestrator | Monday 05 January 2026 00:27:27 +0000 (0:00:01.201) 0:00:08.966 ********
2026-01-05 00:27:31.295521 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-01-05 00:27:31.295537 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-01-05 00:27:31.295553 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-01-05 00:27:31.295569 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-01-05 00:27:31.295608 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-01-05 00:27:31.295627 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-01-05 00:27:31.295645 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-01-05 00:27:31.295662 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-01-05 00:27:31.295680 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-01-05 00:27:31.295699 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-01-05 00:27:31.295793 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-01-05 00:27:31.295808 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-01-05 00:27:31.295819 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-01-05 00:27:31.295831 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-01-05 00:27:31.295855 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-01-05 00:27:31.295867 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-01-05 00:27:31.295878 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-01-05 00:27:31.295888 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-01-05 00:27:31.295907 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-01-05 00:27:31.295919 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-01-05 00:27:31.295929 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-01-05 00:27:31.295940 | orchestrator |
2026-01-05 00:27:31.295951 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-01-05 00:27:31.295962 | orchestrator | Monday 05 January 2026 00:27:28 +0000 (0:00:01.388) 0:00:10.355 ********
2026-01-05 00:27:31.295973 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:27:31.295984 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:27:31.295994 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:27:31.296005 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:27:31.296015 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:27:31.296026 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:27:31.296036 | orchestrator |
2026-01-05 00:27:31.296047 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-01-05 00:27:31.296058 | orchestrator | Monday 05 January 2026 00:27:29 +0000 (0:00:00.174) 0:00:10.529 ********
2026-01-05 00:27:31.296069 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:27:31.296079 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:27:31.296090 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:27:31.296100 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:27:31.296119 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:27:31.296130 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:27:31.296141 | orchestrator |
2026-01-05 00:27:31.296152 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-01-05 00:27:31.296163 | orchestrator | Monday 05 January 2026 00:27:29 +0000 (0:00:00.193) 0:00:10.723 ********
2026-01-05 00:27:31.296174 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:27:31.296185 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:27:31.296196 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:27:31.296206 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:27:31.296217 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:27:31.296227 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:27:31.296238 | orchestrator |
2026-01-05 00:27:31.296249 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-01-05 00:27:31.296259 | orchestrator | Monday 05 January 2026 00:27:29 +0000 (0:00:00.591) 0:00:11.315 ********
2026-01-05 00:27:31.296270 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:27:31.296281 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:27:31.296292 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:27:31.296302 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:27:31.296313 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:27:31.296323 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:27:31.296334 | orchestrator |
2026-01-05 00:27:31.296345 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-01-05 00:27:31.296355 | orchestrator | Monday 05 January 2026 00:27:30 +0000 (0:00:00.196) 0:00:11.511 ********
2026-01-05 00:27:31.296366 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-05 00:27:31.296377 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:27:31.296388 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-01-05 00:27:31.296398 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:27:31.296409 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-05 00:27:31.296420 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:27:31.296430 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-05 00:27:31.296441 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:27:31.296451 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-05 00:27:31.296462 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:27:31.296473 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-01-05 00:27:31.296483 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:27:31.296494 | orchestrator |
2026-01-05 00:27:31.296505 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-01-05 00:27:31.296515 | orchestrator | Monday 05 January 2026 00:27:30 +0000 (0:00:00.753) 0:00:12.264 ********
2026-01-05 00:27:31.296526 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:27:31.296537 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:27:31.296547 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:27:31.296558 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:27:31.296568 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:27:31.296579 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:27:31.296590 | orchestrator |
2026-01-05 00:27:31.296601 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-01-05 00:27:31.296611 | orchestrator | Monday 05 January 2026 00:27:31 +0000 (0:00:00.207) 0:00:12.472 ********
2026-01-05 00:27:31.296622 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:27:31.296633 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:27:31.296644 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:27:31.296654 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:27:31.296674 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:27:32.823009 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:27:32.823131 | orchestrator |
2026-01-05 00:27:32.823149 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-01-05 00:27:32.823163 | orchestrator | Monday 05 January 2026 00:27:31 +0000 (0:00:00.178) 0:00:12.650 ********
2026-01-05 00:27:32.823203 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:27:32.823215 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:27:32.823226 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:27:32.823237 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:27:32.823247 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:27:32.823258 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:27:32.823269 | orchestrator |
2026-01-05 00:27:32.823281 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-01-05 00:27:32.823292 | orchestrator | Monday 05 January 2026 00:27:31 +0000 (0:00:00.182) 0:00:12.832 ********
2026-01-05 00:27:32.823303 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:27:32.823314 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:27:32.823324 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:27:32.823335 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:27:32.823346 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:27:32.823356 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:27:32.823367 | orchestrator |
2026-01-05 00:27:32.823378 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-01-05 00:27:32.823389 | orchestrator | Monday 05 January 2026 00:27:32 +0000 (0:00:00.789) 0:00:13.622 ********
2026-01-05 00:27:32.823400 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:27:32.823411 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:27:32.823421 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:27:32.823432 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:27:32.823443 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:27:32.823454 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:27:32.823464 | orchestrator |
2026-01-05 00:27:32.823475 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:27:32.823488 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-05 00:27:32.823500 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-05 00:27:32.823511 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-05 00:27:32.823522 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-05 00:27:32.823536 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-05 00:27:32.823570 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-05 00:27:32.823583 | orchestrator |
2026-01-05 00:27:32.823596 | orchestrator |
2026-01-05 00:27:32.823609 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:27:32.823623 | orchestrator | Monday 05 January 2026 00:27:32 +0000 (0:00:00.285) 0:00:13.907 ********
2026-01-05 00:27:32.823636 | orchestrator | ===============================================================================
2026-01-05 00:27:32.823648 | orchestrator | Gathering Facts --------------------------------------------------------- 3.37s
2026-01-05 00:27:32.823661 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.39s
2026-01-05 00:27:32.823675 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.31s
2026-01-05 00:27:32.823686 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.20s
2026-01-05 00:27:32.823699 | orchestrator | Do not require tty for all users ---------------------------------------- 0.93s
2026-01-05 00:27:32.823711 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.83s
2026-01-05 00:27:32.823764 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.80s
2026-01-05 00:27:32.823784 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.79s
2026-01-05 00:27:32.823795 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.75s
2026-01-05 00:27:32.823806 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.59s
2026-01-05 00:27:32.823816 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.29s
2026-01-05 00:27:32.823827 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.21s
2026-01-05 00:27:32.823838 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.20s
2026-01-05 00:27:32.823849 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.19s
2026-01-05 00:27:32.823859 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.19s
2026-01-05 00:27:32.823870 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.18s
2026-01-05 00:27:32.823881 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.18s
2026-01-05 00:27:32.823892 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.18s
2026-01-05 00:27:32.823903 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.17s
2026-01-05 00:27:33.191541 | orchestrator | + osism apply --environment custom facts
2026-01-05 00:27:35.193845 | orchestrator | 2026-01-05 00:27:35 | INFO  | Trying to run play facts in environment custom
2026-01-05 00:27:45.333020 | orchestrator | 2026-01-05 00:27:45 | INFO  | Task 8bb1da86-4be5-44ef-aad2-ef21952060e5 (facts) was prepared for execution.
2026-01-05 00:27:45.333123 | orchestrator | 2026-01-05 00:27:45 | INFO  | It takes a moment until task 8bb1da86-4be5-44ef-aad2-ef21952060e5 (facts) has been started and output is visible here.
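The `[WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created` message emitted during the `Set language variables` task is harmless for this run, but the warning itself points at the fix: create the remote temporary directory ahead of time, or pin `remote_tmp` in the Ansible configuration. A minimal sketch, assuming a plain `ansible.cfg`; the chosen path is illustrative and not taken from this job's configuration:

```ini
; ansible.cfg -- hypothetical snippet, not part of this job's output
[defaults]
; Pin the remote temporary directory to a location that is created
; per run, so Ansible does not have to create ~/.ansible/tmp on the
; fly with mode 0700 as a side effect of the first module invocation.
remote_tmp = /tmp/ansible-remote-tmp
```

Pre-creating the directory with the desired owner and mode (as the warning text suggests) achieves the same result without any configuration change.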
2026-01-05 00:28:30.990590 | orchestrator |
2026-01-05 00:28:30.990702 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-01-05 00:28:30.990717 | orchestrator |
2026-01-05 00:28:30.990726 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-05 00:28:30.990734 | orchestrator | Monday 05 January 2026 00:27:49 +0000 (0:00:00.085) 0:00:00.085 ********
2026-01-05 00:28:30.990743 | orchestrator | ok: [testbed-manager]
2026-01-05 00:28:30.990752 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:28:30.990811 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:28:30.990821 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:28:30.990829 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:28:30.990856 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:28:30.990865 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:28:30.990873 | orchestrator |
2026-01-05 00:28:30.990881 | orchestrator | TASK [Copy fact file] **********************************************************
2026-01-05 00:28:30.990889 | orchestrator | Monday 05 January 2026 00:27:50 +0000 (0:00:01.463) 0:00:01.549 ********
2026-01-05 00:28:30.990897 | orchestrator | ok: [testbed-manager]
2026-01-05 00:28:30.990905 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:28:30.990913 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:28:30.990921 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:28:30.990929 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:28:30.990936 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:28:30.990945 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:28:30.990954 | orchestrator |
2026-01-05 00:28:30.990961 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-01-05 00:28:30.990969 | orchestrator |
2026-01-05 00:28:30.990977 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-01-05 00:28:30.990985 | orchestrator | Monday 05 January 2026 00:27:52 +0000 (0:00:01.376) 0:00:02.925 ********
2026-01-05 00:28:30.990993 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:28:30.991001 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:28:30.991009 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:28:30.991040 | orchestrator |
2026-01-05 00:28:30.991048 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-01-05 00:28:30.991057 | orchestrator | Monday 05 January 2026 00:27:52 +0000 (0:00:00.112) 0:00:03.038 ********
2026-01-05 00:28:30.991065 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:28:30.991073 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:28:30.991080 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:28:30.991088 | orchestrator |
2026-01-05 00:28:30.991096 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-01-05 00:28:30.991106 | orchestrator | Monday 05 January 2026 00:27:52 +0000 (0:00:00.220) 0:00:03.258 ********
2026-01-05 00:28:30.991115 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:28:30.991125 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:28:30.991135 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:28:30.991144 | orchestrator |
2026-01-05 00:28:30.991154 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-01-05 00:28:30.991163 | orchestrator | Monday 05 January 2026 00:27:52 +0000 (0:00:00.224) 0:00:03.483 ********
2026-01-05 00:28:30.991174 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:28:30.991185 | orchestrator |
2026-01-05 00:28:30.991194 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-01-05 00:28:30.991204 | orchestrator | Monday 05 January 2026 00:27:53 +0000 (0:00:00.163) 0:00:03.647 ********
2026-01-05 00:28:30.991214 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:28:30.991223 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:28:30.991231 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:28:30.991241 | orchestrator |
2026-01-05 00:28:30.991250 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-01-05 00:28:30.991259 | orchestrator | Monday 05 January 2026 00:27:53 +0000 (0:00:00.463) 0:00:04.111 ********
2026-01-05 00:28:30.991268 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:28:30.991277 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:28:30.991287 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:28:30.991308 | orchestrator |
2026-01-05 00:28:30.991318 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-01-05 00:28:30.991327 | orchestrator | Monday 05 January 2026 00:27:53 +0000 (0:00:00.137) 0:00:04.248 ********
2026-01-05 00:28:30.991336 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:28:30.991345 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:28:30.991355 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:28:30.991364 | orchestrator |
2026-01-05 00:28:30.991373 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-01-05 00:28:30.991382 | orchestrator | Monday 05 January 2026 00:27:54 +0000 (0:00:01.185) 0:00:05.434 ********
2026-01-05 00:28:30.991392 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:28:30.991401 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:28:30.991410 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:28:30.991419 | orchestrator |
2026-01-05 00:28:30.991429 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-01-05 00:28:30.991439 | orchestrator | Monday 05 January 2026 00:27:55 +0000 (0:00:00.609) 0:00:06.044 ********
2026-01-05 00:28:30.991449 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:28:30.991459 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:28:30.991467 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:28:30.991474 | orchestrator |
2026-01-05 00:28:30.991482 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-01-05 00:28:30.991490 | orchestrator | Monday 05 January 2026 00:27:56 +0000 (0:00:01.135) 0:00:07.179 ********
2026-01-05 00:28:30.991498 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:28:30.991506 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:28:30.991522 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:28:30.991531 | orchestrator |
2026-01-05 00:28:30.991539 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-01-05 00:28:30.991553 | orchestrator | Monday 05 January 2026 00:28:13 +0000 (0:00:16.893) 0:00:24.072 ********
2026-01-05 00:28:30.991561 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:28:30.991569 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:28:30.991577 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:28:30.991585 | orchestrator |
2026-01-05 00:28:30.991593 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-01-05 00:28:30.991615 | orchestrator | Monday 05 January 2026 00:28:13 +0000 (0:00:00.107) 0:00:24.180 ********
2026-01-05 00:28:30.991623 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:28:30.991631 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:28:30.991639 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:28:30.991647 | orchestrator |
2026-01-05 00:28:30.991655 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-05 00:28:30.991662 | orchestrator | Monday 05 January 2026 00:28:21 +0000 (0:00:08.105) 0:00:32.285 ********
2026-01-05 00:28:30.991670 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:28:30.991678 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:28:30.991686 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:28:30.991694 | orchestrator |
2026-01-05 00:28:30.991701 | orchestrator | TASK [Copy fact files] *********************************************************
2026-01-05 00:28:30.991709 | orchestrator | Monday 05 January 2026 00:28:22 +0000 (0:00:00.459) 0:00:32.745 ********
2026-01-05 00:28:30.991717 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-01-05 00:28:30.991725 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-01-05 00:28:30.991733 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-01-05 00:28:30.991741 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-01-05 00:28:30.991749 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-01-05 00:28:30.991788 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-01-05 00:28:30.991797 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-01-05 00:28:30.991805 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-01-05 00:28:30.991813 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-01-05 00:28:30.991821 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-01-05 00:28:30.991829 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-01-05 00:28:30.991837 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-01-05 00:28:30.991845 | orchestrator |
2026-01-05 00:28:30.991853 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-01-05 00:28:30.991860 | orchestrator | Monday 05 January 2026 00:28:25 +0000 (0:00:03.672) 0:00:36.417 ********
2026-01-05 00:28:30.991868 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:28:30.991876 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:28:30.991884 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:28:30.991892 | orchestrator |
2026-01-05 00:28:30.991899 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-05 00:28:30.991907 | orchestrator |
2026-01-05 00:28:30.991915 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-05 00:28:30.991923 | orchestrator | Monday 05 January 2026 00:28:27 +0000 (0:00:01.413) 0:00:37.831 ********
2026-01-05 00:28:30.991931 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:28:30.991939 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:28:30.991947 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:28:30.991955 | orchestrator | ok: [testbed-manager]
2026-01-05 00:28:30.991963 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:28:30.991970 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:28:30.991978 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:28:30.991986 | orchestrator |
2026-01-05 00:28:30.991993 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:28:30.992008 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:28:30.992017 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:28:30.992026 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:28:30.992034 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:28:30.992080 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:28:30.992089 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:28:30.992097 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:28:30.992105 | orchestrator |
2026-01-05 00:28:30.992113 | orchestrator |
2026-01-05 00:28:30.992121 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:28:30.992129 | orchestrator | Monday 05 January 2026 00:28:30 +0000 (0:00:03.724) 0:00:41.556 ********
2026-01-05 00:28:30.992137 | orchestrator | ===============================================================================
2026-01-05 00:28:30.992144 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.89s
2026-01-05 00:28:30.992152 | orchestrator | Install required packages (Debian) -------------------------------------- 8.11s
2026-01-05 00:28:30.992160 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.72s
2026-01-05 00:28:30.992168 | orchestrator | Copy fact files --------------------------------------------------------- 3.67s
2026-01-05 00:28:30.992176 | orchestrator | Create custom facts directory ------------------------------------------- 1.46s
2026-01-05 00:28:30.992184 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.41s
2026-01-05 00:28:30.992197 | orchestrator | Copy fact file ---------------------------------------------------------- 1.38s
2026-01-05 00:28:31.264109 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.19s
2026-01-05 00:28:31.264246 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.14s
2026-01-05 00:28:31.264271 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.61s
2026-01-05 00:28:31.264290 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.46s
2026-01-05 00:28:31.264337 | orchestrator | Create custom facts directory ------------------------------------------- 0.46s
2026-01-05 00:28:31.264350 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.22s
2026-01-05 00:28:31.264361 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.22s
2026-01-05 00:28:31.264373 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s
2026-01-05 00:28:31.264386 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.14s
2026-01-05 00:28:31.264405 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s
2026-01-05 00:28:31.264424 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s
2026-01-05 00:28:31.614582 | orchestrator | + osism apply bootstrap
2026-01-05 00:28:43.886621 | orchestrator | 2026-01-05 00:28:43 | INFO  | Task c33c7c8b-3ec1-4076-9d07-fc7f75b65ef8 (bootstrap) was prepared for execution.
2026-01-05 00:28:43.886755 | orchestrator | 2026-01-05 00:28:43 | INFO  | It takes a moment until task c33c7c8b-3ec1-4076-9d07-fc7f75b65ef8 (bootstrap) has been started and output is visible here.
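The `PLAY RECAP` blocks in this log use a fixed `host : key=value ...` layout, which makes them easy to post-process when scanning many job logs for broken runs. A minimal sketch of such a parser (the `parse_recap` helper is my own, not part of OSISM or Zuul tooling):

```python
import re


def parse_recap(line: str) -> tuple[str, dict[str, int]]:
    """Split an Ansible PLAY RECAP line into (host, counters)."""
    # The host name precedes the first colon; the counters follow it.
    host, _, rest = line.partition(":")
    counters = {key: int(value) for key, value in re.findall(r"(\w+)=(\d+)", rest)}
    return host.strip(), counters


host, stats = parse_recap(
    "testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 "
    "skipped=7  rescued=0 ignored=0"
)
print(host, stats["ok"], stats["failed"])
```

Filtering on the `failed` and `unreachable` counters is usually enough to flag a run that needs a closer look.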
2026-01-05 00:29:00.648586 | orchestrator |
2026-01-05 00:29:00.648775 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-01-05 00:29:00.648834 | orchestrator |
2026-01-05 00:29:00.648852 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-01-05 00:29:00.648870 | orchestrator | Monday 05 January 2026 00:28:48 +0000 (0:00:00.153) 0:00:00.153 ********
2026-01-05 00:29:00.648887 | orchestrator | ok: [testbed-manager]
2026-01-05 00:29:00.648903 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:29:00.648913 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:29:00.648922 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:29:00.648932 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:29:00.648941 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:29:00.648951 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:29:00.648960 | orchestrator |
2026-01-05 00:29:00.648970 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-05 00:29:00.648980 | orchestrator |
2026-01-05 00:29:00.648990 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-05 00:29:00.648999 | orchestrator | Monday 05 January 2026 00:28:48 +0000 (0:00:00.256) 0:00:00.410 ********
2026-01-05 00:29:00.649009 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:29:00.649019 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:29:00.649028 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:29:00.649038 | orchestrator | ok: [testbed-manager]
2026-01-05 00:29:00.649048 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:29:00.649059 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:29:00.649076 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:29:00.649089 | orchestrator |
2026-01-05 00:29:00.649100 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-01-05 00:29:00.649112 | orchestrator |
2026-01-05 00:29:00.649123 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-05 00:29:00.649136 | orchestrator | Monday 05 January 2026 00:28:52 +0000 (0:00:03.703) 0:00:04.113 ********
2026-01-05 00:29:00.649149 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-01-05 00:29:00.649161 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-01-05 00:29:00.649172 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-01-05 00:29:00.649184 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-01-05 00:29:00.649196 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-01-05 00:29:00.649207 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 00:29:00.649218 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-01-05 00:29:00.649228 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-01-05 00:29:00.649238 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-05 00:29:00.649247 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-01-05 00:29:00.649257 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-01-05 00:29:00.649267 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-05 00:29:00.649276 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-01-05 00:29:00.649286 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-01-05 00:29:00.649295 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-01-05 00:29:00.649305 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-01-05 00:29:00.649315 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-01-05 00:29:00.649324 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-05 00:29:00.649334 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:29:00.649344 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-01-05 00:29:00.649353 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-05 00:29:00.649363 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-01-05 00:29:00.649372 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-05 00:29:00.649393 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-01-05 00:29:00.649403 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-01-05 00:29:00.649413 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-05 00:29:00.649422 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-05 00:29:00.649432 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:29:00.649441 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-01-05 00:29:00.649451 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-01-05 00:29:00.649461 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-01-05 00:29:00.649470 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-05 00:29:00.649498 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-01-05 00:29:00.649516 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-05 00:29:00.649532 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:29:00.649548 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-05 00:29:00.649566 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-05 00:29:00.649587 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-01-05 00:29:00.649604 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-01-05 00:29:00.649620 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-05 00:29:00.649636 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:29:00.649651 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-05 00:29:00.649667 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-01-05 00:29:00.649684 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-01-05 00:29:00.649702 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-05 00:29:00.649719 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:29:00.649735 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-05 00:29:00.649769 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-01-05 00:29:00.649809 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-05 00:29:00.649825 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-01-05 00:29:00.649842 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-05 00:29:00.649860 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:29:00.649876 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-05 00:29:00.649893 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-05 00:29:00.649909 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-05 00:29:00.649926 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:29:00.649943 | orchestrator |
2026-01-05 00:29:00.649960 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-01-05 00:29:00.649977 | orchestrator |
2026-01-05 00:29:00.649993 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-01-05 00:29:00.650010 | orchestrator | Monday 05 January 2026 00:28:52 +0000
(0:00:00.535) 0:00:04.649 ******** 2026-01-05 00:29:00.650095 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:29:00.650112 | orchestrator | ok: [testbed-manager] 2026-01-05 00:29:00.650129 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:29:00.650145 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:29:00.650161 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:29:00.650178 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:29:00.650195 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:29:00.650211 | orchestrator | 2026-01-05 00:29:00.650228 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-01-05 00:29:00.650245 | orchestrator | Monday 05 January 2026 00:28:54 +0000 (0:00:01.313) 0:00:05.962 ******** 2026-01-05 00:29:00.650261 | orchestrator | ok: [testbed-manager] 2026-01-05 00:29:00.650277 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:29:00.650307 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:29:00.650324 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:29:00.650339 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:29:00.650356 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:29:00.650373 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:29:00.650389 | orchestrator | 2026-01-05 00:29:00.650406 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-01-05 00:29:00.650422 | orchestrator | Monday 05 January 2026 00:28:55 +0000 (0:00:01.347) 0:00:07.310 ******** 2026-01-05 00:29:00.650440 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:29:00.650461 | orchestrator | 2026-01-05 00:29:00.650477 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-01-05 00:29:00.650494 | 
orchestrator | Monday 05 January 2026 00:28:55 +0000 (0:00:00.318) 0:00:07.628 ******** 2026-01-05 00:29:00.650510 | orchestrator | changed: [testbed-manager] 2026-01-05 00:29:00.650526 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:29:00.650542 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:29:00.650559 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:29:00.650575 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:29:00.650591 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:29:00.650608 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:29:00.650625 | orchestrator | 2026-01-05 00:29:00.650641 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-01-05 00:29:00.650657 | orchestrator | Monday 05 January 2026 00:28:57 +0000 (0:00:02.110) 0:00:09.738 ******** 2026-01-05 00:29:00.650668 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:29:00.650679 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:29:00.650691 | orchestrator | 2026-01-05 00:29:00.650701 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-01-05 00:29:00.650711 | orchestrator | Monday 05 January 2026 00:28:58 +0000 (0:00:00.264) 0:00:10.003 ******** 2026-01-05 00:29:00.650721 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:29:00.650730 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:29:00.650740 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:29:00.650749 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:29:00.650759 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:29:00.650768 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:29:00.650829 | orchestrator | 2026-01-05 00:29:00.650842 | orchestrator | TASK 
[osism.commons.proxy : Set system wide settings in environment file] ****** 2026-01-05 00:29:00.650852 | orchestrator | Monday 05 January 2026 00:28:59 +0000 (0:00:01.075) 0:00:11.078 ******** 2026-01-05 00:29:00.650861 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:29:00.650871 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:29:00.650881 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:29:00.650890 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:29:00.650900 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:29:00.650909 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:29:00.650919 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:29:00.650928 | orchestrator | 2026-01-05 00:29:00.650938 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-01-05 00:29:00.650948 | orchestrator | Monday 05 January 2026 00:28:59 +0000 (0:00:00.665) 0:00:11.744 ******** 2026-01-05 00:29:00.650958 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:29:00.650967 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:29:00.650976 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:29:00.650986 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:29:00.650995 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:29:00.651014 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:29:00.651023 | orchestrator | ok: [testbed-manager] 2026-01-05 00:29:00.651033 | orchestrator | 2026-01-05 00:29:00.651042 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-01-05 00:29:00.651065 | orchestrator | Monday 05 January 2026 00:29:00 +0000 (0:00:00.495) 0:00:12.240 ******** 2026-01-05 00:29:00.651075 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:29:00.651084 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:29:00.651106 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:29:13.788219 | 
orchestrator | skipping: [testbed-node-5] 2026-01-05 00:29:13.788330 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:29:13.788342 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:29:13.788355 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:29:13.788373 | orchestrator | 2026-01-05 00:29:13.788392 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-01-05 00:29:13.788407 | orchestrator | Monday 05 January 2026 00:29:00 +0000 (0:00:00.250) 0:00:12.491 ******** 2026-01-05 00:29:13.788423 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:29:13.788458 | orchestrator | 2026-01-05 00:29:13.788468 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-01-05 00:29:13.788477 | orchestrator | Monday 05 January 2026 00:29:01 +0000 (0:00:00.353) 0:00:12.844 ******** 2026-01-05 00:29:13.788485 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:29:13.788494 | orchestrator | 2026-01-05 00:29:13.788502 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-01-05 00:29:13.788510 | orchestrator | Monday 05 January 2026 00:29:01 +0000 (0:00:00.377) 0:00:13.221 ******** 2026-01-05 00:29:13.788518 | orchestrator | ok: [testbed-manager] 2026-01-05 00:29:13.788527 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:29:13.788535 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:29:13.788543 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:29:13.788551 | orchestrator | ok: 
[testbed-node-1] 2026-01-05 00:29:13.788559 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:29:13.788566 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:29:13.788574 | orchestrator | 2026-01-05 00:29:13.788582 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-01-05 00:29:13.788590 | orchestrator | Monday 05 January 2026 00:29:03 +0000 (0:00:01.538) 0:00:14.760 ******** 2026-01-05 00:29:13.788598 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:29:13.788606 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:29:13.788614 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:29:13.788622 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:29:13.788630 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:29:13.788638 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:29:13.788646 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:29:13.788654 | orchestrator | 2026-01-05 00:29:13.788662 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-01-05 00:29:13.788670 | orchestrator | Monday 05 January 2026 00:29:03 +0000 (0:00:00.241) 0:00:15.001 ******** 2026-01-05 00:29:13.788678 | orchestrator | ok: [testbed-manager] 2026-01-05 00:29:13.788686 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:29:13.788694 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:29:13.788702 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:29:13.788709 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:29:13.788717 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:29:13.788725 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:29:13.788734 | orchestrator | 2026-01-05 00:29:13.788744 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-01-05 00:29:13.788780 | orchestrator | Monday 05 January 2026 00:29:03 +0000 (0:00:00.570) 0:00:15.572 ******** 2026-01-05 00:29:13.788818 | 
orchestrator | skipping: [testbed-manager] 2026-01-05 00:29:13.788831 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:29:13.788842 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:29:13.788854 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:29:13.788866 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:29:13.788878 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:29:13.788891 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:29:13.788905 | orchestrator | 2026-01-05 00:29:13.788919 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-01-05 00:29:13.788934 | orchestrator | Monday 05 January 2026 00:29:04 +0000 (0:00:00.343) 0:00:15.915 ******** 2026-01-05 00:29:13.788946 | orchestrator | ok: [testbed-manager] 2026-01-05 00:29:13.788954 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:29:13.788961 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:29:13.788969 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:29:13.788977 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:29:13.788985 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:29:13.788992 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:29:13.789000 | orchestrator | 2026-01-05 00:29:13.789008 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-01-05 00:29:13.789027 | orchestrator | Monday 05 January 2026 00:29:04 +0000 (0:00:00.553) 0:00:16.469 ******** 2026-01-05 00:29:13.789035 | orchestrator | ok: [testbed-manager] 2026-01-05 00:29:13.789044 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:29:13.789058 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:29:13.789080 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:29:13.789094 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:29:13.789107 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:29:13.789119 | 
orchestrator | changed: [testbed-node-0] 2026-01-05 00:29:13.789131 | orchestrator | 2026-01-05 00:29:13.789145 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-01-05 00:29:13.789158 | orchestrator | Monday 05 January 2026 00:29:05 +0000 (0:00:01.190) 0:00:17.659 ******** 2026-01-05 00:29:13.789172 | orchestrator | ok: [testbed-manager] 2026-01-05 00:29:13.789186 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:29:13.789199 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:29:13.789213 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:29:13.789228 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:29:13.789241 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:29:13.789254 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:29:13.789345 | orchestrator | 2026-01-05 00:29:13.789362 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-01-05 00:29:13.789377 | orchestrator | Monday 05 January 2026 00:29:07 +0000 (0:00:01.104) 0:00:18.764 ******** 2026-01-05 00:29:13.789419 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:29:13.789435 | orchestrator | 2026-01-05 00:29:13.789448 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-01-05 00:29:13.789457 | orchestrator | Monday 05 January 2026 00:29:07 +0000 (0:00:00.306) 0:00:19.070 ******** 2026-01-05 00:29:13.789465 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:29:13.789473 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:29:13.789481 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:29:13.789489 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:29:13.789497 | orchestrator | changed: [testbed-node-3] 
2026-01-05 00:29:13.789505 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:29:13.789513 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:29:13.789520 | orchestrator | 2026-01-05 00:29:13.789528 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-01-05 00:29:13.789549 | orchestrator | Monday 05 January 2026 00:29:08 +0000 (0:00:01.576) 0:00:20.646 ******** 2026-01-05 00:29:13.789557 | orchestrator | ok: [testbed-manager] 2026-01-05 00:29:13.789565 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:29:13.789572 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:29:13.789580 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:29:13.789588 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:29:13.789595 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:29:13.789603 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:29:13.789610 | orchestrator | 2026-01-05 00:29:13.789618 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-05 00:29:13.789626 | orchestrator | Monday 05 January 2026 00:29:09 +0000 (0:00:00.217) 0:00:20.864 ******** 2026-01-05 00:29:13.789634 | orchestrator | ok: [testbed-manager] 2026-01-05 00:29:13.789642 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:29:13.789649 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:29:13.789657 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:29:13.789665 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:29:13.789673 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:29:13.789680 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:29:13.789688 | orchestrator | 2026-01-05 00:29:13.789696 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-05 00:29:13.789704 | orchestrator | Monday 05 January 2026 00:29:09 +0000 (0:00:00.255) 0:00:21.119 ******** 2026-01-05 00:29:13.789712 | orchestrator | ok: [testbed-manager] 2026-01-05 
00:29:13.789719 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:29:13.789727 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:29:13.789735 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:29:13.789742 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:29:13.789750 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:29:13.789758 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:29:13.789765 | orchestrator | 2026-01-05 00:29:13.789773 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-05 00:29:13.789781 | orchestrator | Monday 05 January 2026 00:29:09 +0000 (0:00:00.257) 0:00:21.376 ******** 2026-01-05 00:29:13.789844 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:29:13.789861 | orchestrator | 2026-01-05 00:29:13.789875 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-05 00:29:13.789889 | orchestrator | Monday 05 January 2026 00:29:09 +0000 (0:00:00.326) 0:00:21.703 ******** 2026-01-05 00:29:13.789902 | orchestrator | ok: [testbed-manager] 2026-01-05 00:29:13.789915 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:29:13.789923 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:29:13.789930 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:29:13.789938 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:29:13.789946 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:29:13.789953 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:29:13.789961 | orchestrator | 2026-01-05 00:29:13.789969 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-05 00:29:13.789977 | orchestrator | Monday 05 January 2026 00:29:10 +0000 (0:00:00.553) 0:00:22.257 ******** 2026-01-05 00:29:13.789985 | 
orchestrator | skipping: [testbed-manager] 2026-01-05 00:29:13.789993 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:29:13.790001 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:29:13.790009 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:29:13.790075 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:29:13.790084 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:29:13.790091 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:29:13.790099 | orchestrator | 2026-01-05 00:29:13.790107 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-05 00:29:13.790114 | orchestrator | Monday 05 January 2026 00:29:10 +0000 (0:00:00.247) 0:00:22.504 ******** 2026-01-05 00:29:13.790122 | orchestrator | ok: [testbed-manager] 2026-01-05 00:29:13.790144 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:29:13.790152 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:29:13.790160 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:29:13.790172 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:29:13.790185 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:29:13.790198 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:29:13.790212 | orchestrator | 2026-01-05 00:29:13.790226 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-05 00:29:13.790240 | orchestrator | Monday 05 January 2026 00:29:11 +0000 (0:00:01.143) 0:00:23.648 ******** 2026-01-05 00:29:13.790248 | orchestrator | ok: [testbed-manager] 2026-01-05 00:29:13.790256 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:29:13.790264 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:29:13.790271 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:29:13.790279 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:29:13.790287 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:29:13.790294 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:29:13.790302 | 
orchestrator | 2026-01-05 00:29:13.790310 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-01-05 00:29:13.790318 | orchestrator | Monday 05 January 2026 00:29:12 +0000 (0:00:00.601) 0:00:24.249 ******** 2026-01-05 00:29:13.790326 | orchestrator | ok: [testbed-manager] 2026-01-05 00:29:13.790334 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:29:13.790341 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:29:13.790349 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:29:13.790366 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:29:57.400377 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:29:57.400547 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:29:57.400577 | orchestrator | 2026-01-05 00:29:57.400600 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-05 00:29:57.400622 | orchestrator | Monday 05 January 2026 00:29:13 +0000 (0:00:01.272) 0:00:25.521 ******** 2026-01-05 00:29:57.400642 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:29:57.400662 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:29:57.400680 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:29:57.400698 | orchestrator | changed: [testbed-manager] 2026-01-05 00:29:57.400715 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:29:57.400735 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:29:57.400754 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:29:57.400773 | orchestrator | 2026-01-05 00:29:57.400794 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2026-01-05 00:29:57.400812 | orchestrator | Monday 05 January 2026 00:29:31 +0000 (0:00:17.434) 0:00:42.957 ******** 2026-01-05 00:29:57.400865 | orchestrator | ok: [testbed-manager] 2026-01-05 00:29:57.400890 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:29:57.400911 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:29:57.400939 
| orchestrator | ok: [testbed-node-5] 2026-01-05 00:29:57.400962 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:29:57.400988 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:29:57.401012 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:29:57.401033 | orchestrator | 2026-01-05 00:29:57.401063 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-01-05 00:29:57.401084 | orchestrator | Monday 05 January 2026 00:29:31 +0000 (0:00:00.253) 0:00:43.210 ******** 2026-01-05 00:29:57.401105 | orchestrator | ok: [testbed-manager] 2026-01-05 00:29:57.401126 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:29:57.401146 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:29:57.401166 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:29:57.401186 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:29:57.401325 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:29:57.401352 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:29:57.401369 | orchestrator | 2026-01-05 00:29:57.401388 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-01-05 00:29:57.401408 | orchestrator | Monday 05 January 2026 00:29:31 +0000 (0:00:00.254) 0:00:43.465 ******** 2026-01-05 00:29:57.401463 | orchestrator | ok: [testbed-manager] 2026-01-05 00:29:57.401481 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:29:57.401498 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:29:57.401515 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:29:57.401530 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:29:57.401546 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:29:57.401610 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:29:57.401626 | orchestrator | 2026-01-05 00:29:57.401641 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-01-05 00:29:57.401658 | orchestrator | Monday 05 January 2026 00:29:31 +0000 (0:00:00.225) 0:00:43.691 
******** 2026-01-05 00:29:57.401680 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:29:57.401700 | orchestrator | 2026-01-05 00:29:57.401719 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-01-05 00:29:57.401738 | orchestrator | Monday 05 January 2026 00:29:32 +0000 (0:00:00.337) 0:00:44.028 ******** 2026-01-05 00:29:57.401757 | orchestrator | ok: [testbed-manager] 2026-01-05 00:29:57.401775 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:29:57.401795 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:29:57.401813 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:29:57.401920 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:29:57.401941 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:29:57.401959 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:29:57.401978 | orchestrator | 2026-01-05 00:29:57.401997 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-01-05 00:29:57.402119 | orchestrator | Monday 05 January 2026 00:29:34 +0000 (0:00:01.808) 0:00:45.836 ******** 2026-01-05 00:29:57.402145 | orchestrator | changed: [testbed-manager] 2026-01-05 00:29:57.402165 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:29:57.402185 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:29:57.402206 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:29:57.402225 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:29:57.402243 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:29:57.402262 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:29:57.402283 | orchestrator | 2026-01-05 00:29:57.402304 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-01-05 
00:29:57.402323 | orchestrator | Monday 05 January 2026 00:29:35 +0000 (0:00:01.187) 0:00:47.024 ******** 2026-01-05 00:29:57.402342 | orchestrator | ok: [testbed-manager] 2026-01-05 00:29:57.402360 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:29:57.402379 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:29:57.402397 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:29:57.402416 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:29:57.402434 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:29:57.402453 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:29:57.402471 | orchestrator | 2026-01-05 00:29:57.402488 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-01-05 00:29:57.402505 | orchestrator | Monday 05 January 2026 00:29:36 +0000 (0:00:00.863) 0:00:47.887 ******** 2026-01-05 00:29:57.402553 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:29:57.402572 | orchestrator | 2026-01-05 00:29:57.402589 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-01-05 00:29:57.402633 | orchestrator | Monday 05 January 2026 00:29:36 +0000 (0:00:00.332) 0:00:48.220 ******** 2026-01-05 00:29:57.402651 | orchestrator | changed: [testbed-manager] 2026-01-05 00:29:57.402669 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:29:57.402686 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:29:57.402704 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:29:57.402737 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:29:57.402776 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:29:57.402795 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:29:57.402813 | orchestrator | 2026-01-05 00:29:57.402950 | orchestrator | TASK 
[osism.services.rsyslog : Include additional log server tasks] ************
2026-01-05 00:29:57.402972 | orchestrator | Monday 05 January 2026 00:29:37 +0000 (0:00:01.084) 0:00:49.304 ********
2026-01-05 00:29:57.402991 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:29:57.403011 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:29:57.403030 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:29:57.403051 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:29:57.403071 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:29:57.403088 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:29:57.403104 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:29:57.403121 | orchestrator |
2026-01-05 00:29:57.403138 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-01-05 00:29:57.403156 | orchestrator | Monday 05 January 2026 00:29:37 +0000 (0:00:00.243) 0:00:49.548 ********
2026-01-05 00:29:57.403174 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:29:57.403191 | orchestrator |
2026-01-05 00:29:57.403210 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-01-05 00:29:57.403227 | orchestrator | Monday 05 January 2026 00:29:38 +0000 (0:00:00.340) 0:00:49.888 ********
2026-01-05 00:29:57.403246 | orchestrator | ok: [testbed-manager]
2026-01-05 00:29:57.403263 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:29:57.403280 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:29:57.403296 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:29:57.403312 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:29:57.403329 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:29:57.403345 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:29:57.403362 | orchestrator |
2026-01-05 00:29:57.403455 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-01-05 00:29:57.403474 | orchestrator | Monday 05 January 2026 00:29:40 +0000 (0:00:02.153) 0:00:52.042 ********
2026-01-05 00:29:57.403491 | orchestrator | changed: [testbed-manager]
2026-01-05 00:29:57.403507 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:29:57.403524 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:29:57.403540 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:29:57.403556 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:29:57.403572 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:29:57.403589 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:29:57.403604 | orchestrator |
2026-01-05 00:29:57.403621 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-01-05 00:29:57.403637 | orchestrator | Monday 05 January 2026 00:29:41 +0000 (0:00:01.144) 0:00:53.186 ********
2026-01-05 00:29:57.403653 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:29:57.403670 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:29:57.403684 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:29:57.403700 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:29:57.403715 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:29:57.403733 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:29:57.403749 | orchestrator | changed: [testbed-manager]
2026-01-05 00:29:57.403766 | orchestrator |
2026-01-05 00:29:57.403783 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-01-05 00:29:57.403798 | orchestrator | Monday 05 January 2026 00:29:54 +0000 (0:00:13.375) 0:01:06.561 ********
2026-01-05 00:29:57.403838 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:29:57.403854 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:29:57.403870 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:29:57.403885 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:29:57.403902 | orchestrator | ok: [testbed-manager]
2026-01-05 00:29:57.403918 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:29:57.403953 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:29:57.403968 | orchestrator |
2026-01-05 00:29:57.403985 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-01-05 00:29:57.404002 | orchestrator | Monday 05 January 2026 00:29:55 +0000 (0:00:00.843) 0:01:07.405 ********
2026-01-05 00:29:57.404018 | orchestrator | ok: [testbed-manager]
2026-01-05 00:29:57.404034 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:29:57.404051 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:29:57.404067 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:29:57.404084 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:29:57.404100 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:29:57.404117 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:29:57.404133 | orchestrator |
2026-01-05 00:29:57.404150 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-01-05 00:29:57.404166 | orchestrator | Monday 05 January 2026 00:29:56 +0000 (0:00:00.228) 0:01:08.378 ********
2026-01-05 00:29:57.404183 | orchestrator | ok: [testbed-manager]
2026-01-05 00:29:57.404199 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:29:57.404214 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:29:57.404231 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:29:57.404247 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:29:57.404263 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:29:57.404292 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:29:57.404309 | orchestrator |
2026-01-05 00:29:57.404325 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-01-05 00:29:57.404342 | orchestrator | Monday 05 January 2026 00:29:56 +0000 (0:00:00.228) 0:01:08.606 ********
2026-01-05 00:29:57.404358 | orchestrator | ok: [testbed-manager]
2026-01-05 00:29:57.404373 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:29:57.404411 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:29:57.404439 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:29:57.404456 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:29:57.404472 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:29:57.404487 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:29:57.404503 | orchestrator |
2026-01-05 00:29:57.404518 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-01-05 00:29:57.404534 | orchestrator | Monday 05 January 2026 00:29:57 +0000 (0:00:00.240) 0:01:08.847 ********
2026-01-05 00:29:57.404552 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:29:57.404569 | orchestrator |
2026-01-05 00:29:57.404606 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-01-05 00:32:20.546434 | orchestrator | Monday 05 January 2026 00:29:57 +0000 (0:00:00.287) 0:01:09.134 ********
2026-01-05 00:32:20.546560 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:20.546578 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:32:20.546591 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:32:20.546602 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:32:20.546613 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:32:20.546625 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:32:20.546636 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:32:20.546648 | orchestrator |
2026-01-05 00:32:20.546660 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-01-05 00:32:20.546672 | orchestrator | Monday 05 January 2026 00:29:59 +0000 (0:00:02.129) 0:01:11.264 ********
2026-01-05 00:32:20.546683 | orchestrator | changed: [testbed-manager]
2026-01-05 00:32:20.546695 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:32:20.546706 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:32:20.546717 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:32:20.546728 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:32:20.546739 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:32:20.546750 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:32:20.546761 | orchestrator |
2026-01-05 00:32:20.546798 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-01-05 00:32:20.546810 | orchestrator | Monday 05 January 2026 00:30:00 +0000 (0:00:00.598) 0:01:11.862 ********
2026-01-05 00:32:20.546821 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:20.546832 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:32:20.546843 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:32:20.546854 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:32:20.546865 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:32:20.546876 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:32:20.546886 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:32:20.546924 | orchestrator |
2026-01-05 00:32:20.546935 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-01-05 00:32:20.546946 | orchestrator | Monday 05 January 2026 00:30:00 +0000 (0:00:00.230) 0:01:12.093 ********
2026-01-05 00:32:20.546959 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:20.546972 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:32:20.546984 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:32:20.546997 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:32:20.547009 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:32:20.547022 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:32:20.547035 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:32:20.547047 | orchestrator |
2026-01-05 00:32:20.547061 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-01-05 00:32:20.547074 | orchestrator | Monday 05 January 2026 00:30:01 +0000 (0:00:01.443) 0:01:13.537 ********
2026-01-05 00:32:20.547087 | orchestrator | changed: [testbed-manager]
2026-01-05 00:32:20.547100 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:32:20.547113 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:32:20.547125 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:32:20.547138 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:32:20.547151 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:32:20.547164 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:32:20.547177 | orchestrator |
2026-01-05 00:32:20.547190 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-01-05 00:32:20.547203 | orchestrator | Monday 05 January 2026 00:30:04 +0000 (0:00:02.495) 0:01:16.032 ********
2026-01-05 00:32:20.547217 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:20.547230 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:32:20.547243 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:32:20.547256 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:32:20.547269 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:32:20.547282 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:32:20.547295 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:32:20.547308 | orchestrator |
2026-01-05 00:32:20.547322 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-01-05 00:32:20.547333 | orchestrator | Monday 05 January 2026 00:30:07 +0000 (0:00:03.160) 0:01:19.192 ********
2026-01-05 00:32:20.547345 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:20.547356 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:32:20.547367 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:32:20.547378 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:32:20.547388 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:32:20.547399 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:32:20.547411 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:32:20.547422 | orchestrator |
2026-01-05 00:32:20.547433 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-01-05 00:32:20.547444 | orchestrator | Monday 05 January 2026 00:30:47 +0000 (0:00:40.190) 0:01:59.383 ********
2026-01-05 00:32:20.547455 | orchestrator | changed: [testbed-manager]
2026-01-05 00:32:20.547466 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:32:20.547477 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:32:20.547488 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:32:20.547499 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:32:20.547509 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:32:20.547520 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:32:20.547541 | orchestrator |
2026-01-05 00:32:20.547571 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-01-05 00:32:20.547583 | orchestrator | Monday 05 January 2026 00:32:03 +0000 (0:01:15.747) 0:03:15.131 ********
2026-01-05 00:32:20.547594 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:20.547604 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:32:20.547616 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:32:20.547626 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:32:20.547637 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:32:20.547647 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:32:20.547658 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:32:20.547669 | orchestrator |
2026-01-05 00:32:20.547680 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-01-05 00:32:20.547691 | orchestrator | Monday 05 January 2026 00:32:05 +0000 (0:00:01.778) 0:03:16.910 ********
2026-01-05 00:32:20.547702 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:32:20.547713 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:32:20.547723 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:32:20.547734 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:32:20.547745 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:32:20.547756 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:32:20.547766 | orchestrator | changed: [testbed-manager]
2026-01-05 00:32:20.547777 | orchestrator |
2026-01-05 00:32:20.547788 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-01-05 00:32:20.547799 | orchestrator | Monday 05 January 2026 00:32:18 +0000 (0:00:13.096) 0:03:30.007 ********
2026-01-05 00:32:20.547842 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-01-05 00:32:20.547867 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-01-05 00:32:20.547882 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-01-05 00:32:20.547933 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-01-05 00:32:20.547945 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-01-05 00:32:20.547956 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-01-05 00:32:20.547976 | orchestrator |
2026-01-05 00:32:20.547988 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-01-05 00:32:20.547999 | orchestrator | Monday 05 January 2026 00:32:18 +0000 (0:00:00.421) 0:03:30.428 ********
2026-01-05 00:32:20.548015 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-05 00:32:20.548027 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:32:20.548038 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-05 00:32:20.548049 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:32:20.548060 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-05 00:32:20.548071 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:32:20.548082 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-05 00:32:20.548092 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:32:20.548104 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-05 00:32:20.548115 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-05 00:32:20.548126 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-05 00:32:20.548137 | orchestrator |
2026-01-05 00:32:20.548148 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-01-05 00:32:20.548159 | orchestrator | Monday 05 January 2026 00:32:20 +0000 (0:00:01.771) 0:03:32.200 ********
2026-01-05 00:32:20.548169 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-05 00:32:20.548187 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-05 00:32:20.548206 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-05 00:32:20.548226 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-05 00:32:20.548245 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-05 00:32:20.548276 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-05 00:32:27.543640 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-05 00:32:27.543765 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-05 00:32:27.543780 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-05 00:32:27.543792 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-05 00:32:27.543826 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-05 00:32:27.543838 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-05 00:32:27.543849 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-05 00:32:27.543860 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-05 00:32:27.543871 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-05 00:32:27.543882 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-05 00:32:27.543948 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-05 00:32:27.543960 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-05 00:32:27.543973 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-05 00:32:27.544009 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-05 00:32:27.544021 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-05 00:32:27.544032 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-05 00:32:27.544043 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-05 00:32:27.544054 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-05 00:32:27.544065 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-05 00:32:27.544095 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:32:27.544107 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-05 00:32:27.544129 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-05 00:32:27.544141 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-05 00:32:27.544152 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-05 00:32:27.544163 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-05 00:32:27.544174 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-05 00:32:27.544184 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-05 00:32:27.544195 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-05 00:32:27.544206 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-05 00:32:27.544217 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-05 00:32:27.544228 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-05 00:32:27.544239 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-05 00:32:27.544256 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-05 00:32:27.544267 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:32:27.544284 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-05 00:32:27.544303 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-05 00:32:27.544321 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:32:27.544340 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:32:27.544359 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-05 00:32:27.544377 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-05 00:32:27.544394 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-05 00:32:27.544412 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-05 00:32:27.544431 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-05 00:32:27.544476 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-05 00:32:27.544498 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-05 00:32:27.544517 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-05 00:32:27.544548 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-05 00:32:27.544568 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-05 00:32:27.544588 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-05 00:32:27.544606 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-05 00:32:27.544626 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-05 00:32:27.544645 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-05 00:32:27.544665 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-05 00:32:27.544684 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-05 00:32:27.544702 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-05 00:32:27.544720 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-05 00:32:27.544738 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-05 00:32:27.544757 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-05 00:32:27.544776 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-05 00:32:27.544795 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-05 00:32:27.544814 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-05 00:32:27.544832 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-05 00:32:27.544851 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-05 00:32:27.544862 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-05 00:32:27.544873 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-05 00:32:27.544884 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-05 00:32:27.544944 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-05 00:32:27.544956 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-05 00:32:27.544967 | orchestrator |
2026-01-05 00:32:27.544979 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-01-05 00:32:27.544990 | orchestrator | Monday 05 January 2026 00:32:25 +0000 (0:00:04.905) 0:03:37.105 ********
2026-01-05 00:32:27.545000 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-05 00:32:27.545011 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-05 00:32:27.545022 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-05 00:32:27.545033 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-05 00:32:27.545044 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-05 00:32:27.545054 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-05 00:32:27.545065 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-05 00:32:27.545076 | orchestrator |
2026-01-05 00:32:27.545095 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-01-05 00:32:27.545106 | orchestrator | Monday 05 January 2026 00:32:27 +0000 (0:00:01.670) 0:03:38.776 ********
2026-01-05 00:32:27.545117 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-05 00:32:27.545138 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:32:27.545150 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-05 00:32:27.545161 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-05 00:32:27.545172 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:32:27.545182 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:32:27.545193 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-05 00:32:27.545204 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:32:27.545215 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-05 00:32:27.545226 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-05 00:32:27.545248 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-05 00:32:42.297756 | orchestrator |
2026-01-05 00:32:42.298921 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-01-05 00:32:42.298986 | orchestrator | Monday 05 January 2026 00:32:27 +0000 (0:00:00.506) 0:03:39.282 ********
2026-01-05 00:32:42.298997 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-05 00:32:42.299006 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-05 00:32:42.299014 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:32:42.299023 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-05 00:32:42.299029 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:32:42.299037 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-05 00:32:42.299044 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:32:42.299051 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:32:42.299058 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-05 00:32:42.299064 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-05 00:32:42.299071 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-05 00:32:42.299078 | orchestrator |
2026-01-05 00:32:42.299085 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-01-05 00:32:42.299092 | orchestrator | Monday 05 January 2026 00:32:28 +0000 (0:00:00.654) 0:03:39.938 ********
2026-01-05 00:32:42.299099 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-05 00:32:42.299106 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:32:42.299112 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-05 00:32:42.299119 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:32:42.299126 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-05 00:32:42.299132 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:32:42.299139 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-05 00:32:42.299146 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:32:42.299153 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-05 00:32:42.299160 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-05 00:32:42.299167 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-05 00:32:42.299199 | orchestrator |
2026-01-05 00:32:42.299206 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-01-05 00:32:42.299213 | orchestrator | Monday 05 January 2026 00:32:28 +0000 (0:00:00.324) 0:03:40.535 ********
2026-01-05 00:32:42.299220 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:32:42.299226 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:32:42.299233 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:32:42.299239 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:32:42.299246 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:32:42.299253 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:32:42.299260 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:32:42.299266 | orchestrator |
2026-01-05 00:32:42.299273 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-01-05 00:32:42.299280 | orchestrator | Monday 05 January 2026 00:32:29 +0000 (0:00:00.324) 0:03:40.859 ********
2026-01-05 00:32:42.299287 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:32:42.299294 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:32:42.299301 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:32:42.299307 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:32:42.299314 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:32:42.299320 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:32:42.299327 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:42.299333 | orchestrator |
2026-01-05 00:32:42.299354 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-01-05 00:32:42.299361 | orchestrator | Monday 05 January 2026 00:32:35 +0000 (0:00:05.922) 0:03:46.783 ********
2026-01-05 00:32:42.299369 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-01-05 00:32:42.299376 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:32:42.299382 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-01-05 00:32:42.299389 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:32:42.299395 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-01-05 00:32:42.299402 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-01-05 00:32:42.299409 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:32:42.299415 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-01-05 00:32:42.299422 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:32:42.299429 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-01-05 00:32:42.299435 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:32:42.299442 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:32:42.299448 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-01-05 00:32:42.299455 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:32:42.299462 | orchestrator |
2026-01-05 00:32:42.299469 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-01-05 00:32:42.299476 | orchestrator | Monday 05 January 2026 00:32:35 +0000 (0:00:00.458) 0:03:47.241 ********
2026-01-05 00:32:42.299483 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-01-05 00:32:42.299490 | orchestrator |
ok: [testbed-manager] => (item=cron) 2026-01-05 00:32:42.299497 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-01-05 00:32:42.299527 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-01-05 00:32:42.299535 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-01-05 00:32:42.299542 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-01-05 00:32:42.299548 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-01-05 00:32:42.299555 | orchestrator | 2026-01-05 00:32:42.299562 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-01-05 00:32:42.299568 | orchestrator | Monday 05 January 2026 00:32:36 +0000 (0:00:01.407) 0:03:48.648 ******** 2026-01-05 00:32:42.299578 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:32:42.299587 | orchestrator | 2026-01-05 00:32:42.299600 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-01-05 00:32:42.299607 | orchestrator | Monday 05 January 2026 00:32:37 +0000 (0:00:00.562) 0:03:49.211 ******** 2026-01-05 00:32:42.299614 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:32:42.299621 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:32:42.299628 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:32:42.299634 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:32:42.299641 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:32:42.299648 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:32:42.299654 | orchestrator | ok: [testbed-manager] 2026-01-05 00:32:42.299661 | orchestrator | 2026-01-05 00:32:42.299668 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-01-05 00:32:42.299675 | orchestrator | Monday 05 January 2026 00:32:39 +0000 
(0:00:01.727) 0:03:50.939 ******** 2026-01-05 00:32:42.299681 | orchestrator | ok: [testbed-manager] 2026-01-05 00:32:42.299688 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:32:42.299694 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:32:42.299701 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:32:42.299708 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:32:42.299714 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:32:42.299721 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:32:42.299727 | orchestrator | 2026-01-05 00:32:42.299735 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-01-05 00:32:42.299745 | orchestrator | Monday 05 January 2026 00:32:39 +0000 (0:00:00.671) 0:03:51.611 ******** 2026-01-05 00:32:42.299755 | orchestrator | changed: [testbed-manager] 2026-01-05 00:32:42.299767 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:32:42.299777 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:32:42.299788 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:32:42.299798 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:32:42.299808 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:32:42.299815 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:32:42.299822 | orchestrator | 2026-01-05 00:32:42.299829 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-01-05 00:32:42.299835 | orchestrator | Monday 05 January 2026 00:32:40 +0000 (0:00:00.661) 0:03:52.272 ******** 2026-01-05 00:32:42.299842 | orchestrator | ok: [testbed-manager] 2026-01-05 00:32:42.299848 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:32:42.299855 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:32:42.299862 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:32:42.299869 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:32:42.299875 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:32:42.299882 | orchestrator | ok: 
[testbed-node-2] 2026-01-05 00:32:42.299906 | orchestrator | 2026-01-05 00:32:42.299914 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-01-05 00:32:42.299922 | orchestrator | Monday 05 January 2026 00:32:41 +0000 (0:00:00.671) 0:03:52.943 ******** 2026-01-05 00:32:42.299932 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767571489.09297, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 00:32:42.299949 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767571512.6459315, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 00:32:42.299963 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767571503.4683347, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 00:32:42.299988 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767571519.3379407, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 00:32:47.477568 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767571508.0018315, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 00:32:47.477708 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767571512.6439915, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 00:32:47.477727 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767571515.4830635, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 00:32:47.477740 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 00:32:47.477774 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 00:32:47.477810 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 
1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 00:32:47.477822 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 00:32:47.477867 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 00:32:47.477880 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}) 2026-01-05 00:32:47.477927 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 00:32:47.477948 | orchestrator | 2026-01-05 00:32:47.477961 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-01-05 00:32:47.477975 | orchestrator | Monday 05 January 2026 00:32:42 +0000 (0:00:01.087) 0:03:54.030 ******** 2026-01-05 00:32:47.477986 | orchestrator | changed: [testbed-manager] 2026-01-05 00:32:47.477998 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:32:47.478009 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:32:47.478078 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:32:47.478092 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:32:47.478104 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:32:47.478117 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:32:47.478130 | orchestrator | 2026-01-05 00:32:47.478144 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-01-05 00:32:47.478158 | orchestrator | Monday 05 January 2026 00:32:43 +0000 (0:00:01.157) 0:03:55.188 ******** 2026-01-05 00:32:47.478170 | orchestrator | changed: [testbed-manager] 2026-01-05 00:32:47.478183 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:32:47.478208 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:32:47.478220 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:32:47.478233 | 
orchestrator | changed: [testbed-node-0] 2026-01-05 00:32:47.478246 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:32:47.478259 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:32:47.478271 | orchestrator | 2026-01-05 00:32:47.478284 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-01-05 00:32:47.478295 | orchestrator | Monday 05 January 2026 00:32:44 +0000 (0:00:01.229) 0:03:56.417 ******** 2026-01-05 00:32:47.478306 | orchestrator | changed: [testbed-manager] 2026-01-05 00:32:47.478323 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:32:47.478334 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:32:47.478345 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:32:47.478356 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:32:47.478366 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:32:47.478377 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:32:47.478387 | orchestrator | 2026-01-05 00:32:47.478398 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-01-05 00:32:47.478409 | orchestrator | Monday 05 January 2026 00:32:45 +0000 (0:00:01.262) 0:03:57.680 ******** 2026-01-05 00:32:47.478420 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:32:47.478431 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:32:47.478441 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:32:47.478452 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:32:47.478463 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:32:47.478474 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:32:47.478484 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:32:47.478495 | orchestrator | 2026-01-05 00:32:47.478506 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-01-05 00:32:47.478517 | orchestrator | Monday 05 January 2026 00:32:46 +0000 
(0:00:00.339) 0:03:58.019 ******** 2026-01-05 00:32:47.478527 | orchestrator | ok: [testbed-manager] 2026-01-05 00:32:47.478539 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:32:47.478550 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:32:47.478561 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:32:47.478571 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:32:47.478582 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:32:47.478593 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:32:47.478603 | orchestrator | 2026-01-05 00:32:47.478614 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-01-05 00:32:47.478625 | orchestrator | Monday 05 January 2026 00:32:47 +0000 (0:00:00.769) 0:03:58.789 ******** 2026-01-05 00:32:47.478637 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:32:47.478650 | orchestrator | 2026-01-05 00:32:47.478661 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-01-05 00:32:47.478681 | orchestrator | Monday 05 January 2026 00:32:47 +0000 (0:00:00.427) 0:03:59.216 ******** 2026-01-05 00:34:05.943012 | orchestrator | ok: [testbed-manager] 2026-01-05 00:34:05.943146 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:34:05.943163 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:34:05.943175 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:34:05.943187 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:34:05.943198 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:34:05.943209 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:34:05.943221 | orchestrator | 2026-01-05 00:34:05.943234 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 
2026-01-05 00:34:05.943247 | orchestrator | Monday 05 January 2026 00:32:55 +0000 (0:00:08.440) 0:04:07.657 ******** 2026-01-05 00:34:05.943258 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:34:05.943270 | orchestrator | ok: [testbed-manager] 2026-01-05 00:34:05.943282 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:34:05.943319 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:34:05.943331 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:34:05.943342 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:34:05.943353 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:34:05.943364 | orchestrator | 2026-01-05 00:34:05.943375 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-01-05 00:34:05.943386 | orchestrator | Monday 05 January 2026 00:32:57 +0000 (0:00:01.294) 0:04:08.951 ******** 2026-01-05 00:34:05.943397 | orchestrator | ok: [testbed-manager] 2026-01-05 00:34:05.943408 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:34:05.943419 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:34:05.943430 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:34:05.943441 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:34:05.943452 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:34:05.943465 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:34:05.943478 | orchestrator | 2026-01-05 00:34:05.943491 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-01-05 00:34:05.943505 | orchestrator | Monday 05 January 2026 00:32:58 +0000 (0:00:01.249) 0:04:10.201 ******** 2026-01-05 00:34:05.943520 | orchestrator | ok: [testbed-manager] 2026-01-05 00:34:05.943532 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:34:05.943545 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:34:05.943558 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:34:05.943570 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:34:05.943584 | orchestrator | ok: [testbed-node-1] 
2026-01-05 00:34:05.943598 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:34:05.943610 | orchestrator | 2026-01-05 00:34:05.943624 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-01-05 00:34:05.943639 | orchestrator | Monday 05 January 2026 00:32:58 +0000 (0:00:00.316) 0:04:10.518 ******** 2026-01-05 00:34:05.943652 | orchestrator | ok: [testbed-manager] 2026-01-05 00:34:05.943664 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:34:05.943677 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:34:05.943690 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:34:05.943703 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:34:05.943717 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:34:05.943730 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:34:05.943743 | orchestrator | 2026-01-05 00:34:05.943757 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-01-05 00:34:05.943770 | orchestrator | Monday 05 January 2026 00:32:59 +0000 (0:00:00.348) 0:04:10.867 ******** 2026-01-05 00:34:05.943784 | orchestrator | ok: [testbed-manager] 2026-01-05 00:34:05.943797 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:34:05.943810 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:34:05.943821 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:34:05.943831 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:34:05.943869 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:34:05.943881 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:34:05.943892 | orchestrator | 2026-01-05 00:34:05.943903 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-01-05 00:34:05.943914 | orchestrator | Monday 05 January 2026 00:32:59 +0000 (0:00:00.318) 0:04:11.185 ******** 2026-01-05 00:34:05.943926 | orchestrator | ok: [testbed-manager] 2026-01-05 00:34:05.943937 | orchestrator | ok: [testbed-node-0] 
2026-01-05 00:34:05.943948 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:34:05.943978 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:34:05.943990 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:34:05.944001 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:34:05.944012 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:34:05.944023 | orchestrator | 2026-01-05 00:34:05.944034 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-01-05 00:34:05.944045 | orchestrator | Monday 05 January 2026 00:33:05 +0000 (0:00:06.460) 0:04:17.646 ******** 2026-01-05 00:34:05.944058 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:34:05.944079 | orchestrator | 2026-01-05 00:34:05.944090 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-01-05 00:34:05.944102 | orchestrator | Monday 05 January 2026 00:33:06 +0000 (0:00:00.457) 0:04:18.104 ******** 2026-01-05 00:34:05.944113 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-01-05 00:34:05.944124 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-01-05 00:34:05.944135 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-01-05 00:34:05.944146 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-01-05 00:34:05.944157 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:34:05.944168 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-01-05 00:34:05.944178 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2026-01-05 00:34:05.944189 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:34:05.944200 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  
2026-01-05 00:34:05.944211 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:34:05.944222 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-01-05 00:34:05.944233 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2026-01-05 00:34:05.944244 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-01-05 00:34:05.944255 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:34:05.944266 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:34:05.944276 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-01-05 00:34:05.944306 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-01-05 00:34:05.944318 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:34:05.944330 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-01-05 00:34:05.944341 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-01-05 00:34:05.944352 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:34:05.944363 | orchestrator | 2026-01-05 00:34:05.944374 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-01-05 00:34:05.944384 | orchestrator | Monday 05 January 2026 00:33:06 +0000 (0:00:00.347) 0:04:18.452 ******** 2026-01-05 00:34:05.944396 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:34:05.944408 | orchestrator | 2026-01-05 00:34:05.944419 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2026-01-05 00:34:05.944430 | orchestrator | Monday 05 January 2026 00:33:07 +0000 (0:00:00.417) 0:04:18.869 ******** 2026-01-05 00:34:05.944441 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2026-01-05 
00:34:05.944452 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2026-01-05 00:34:05.944462 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:34:05.944473 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:34:05.944484 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2026-01-05 00:34:05.944495 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2026-01-05 00:34:05.944506 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:34:05.944516 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2026-01-05 00:34:05.944527 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:34:05.944538 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2026-01-05 00:34:05.944548 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:34:05.944559 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:34:05.944570 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2026-01-05 00:34:05.944581 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:34:05.944591 | orchestrator | 2026-01-05 00:34:05.944609 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2026-01-05 00:34:05.944620 | orchestrator | Monday 05 January 2026 00:33:07 +0000 (0:00:00.387) 0:04:19.257 ******** 2026-01-05 00:34:05.944631 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:34:05.944643 | orchestrator | 2026-01-05 00:34:05.945793 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2026-01-05 00:34:05.945825 | orchestrator | Monday 05 January 2026 00:33:07 +0000 (0:00:00.450) 0:04:19.708 ******** 2026-01-05 00:34:05.945871 | 
orchestrator | changed: [testbed-node-3]
2026-01-05 00:34:05.945892 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:34:05.945911 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:34:05.945930 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:34:05.945948 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:34:05.945964 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:34:05.945975 | orchestrator | changed: [testbed-manager]
2026-01-05 00:34:05.945985 | orchestrator |
2026-01-05 00:34:05.945996 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-01-05 00:34:05.946008 | orchestrator | Monday 05 January 2026 00:33:42 +0000 (0:00:34.426) 0:04:54.135 ********
2026-01-05 00:34:05.946087 | orchestrator | changed: [testbed-manager]
2026-01-05 00:34:05.946101 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:34:05.946112 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:34:05.946123 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:34:05.946134 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:34:05.946145 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:34:05.946155 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:34:05.946166 | orchestrator |
2026-01-05 00:34:05.946177 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-01-05 00:34:05.946188 | orchestrator | Monday 05 January 2026 00:33:50 +0000 (0:00:07.947) 0:05:02.082 ********
2026-01-05 00:34:05.946199 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:34:05.946210 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:34:05.946221 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:34:05.946231 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:34:05.946242 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:34:05.946253 | orchestrator | changed: [testbed-manager]
2026-01-05 00:34:05.946263 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:34:05.946274 | orchestrator |
2026-01-05 00:34:05.946285 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-01-05 00:34:05.946314 | orchestrator | Monday 05 January 2026 00:33:58 +0000 (0:00:07.712) 0:05:09.795 ********
2026-01-05 00:34:05.946325 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:34:05.946337 | orchestrator | ok: [testbed-manager]
2026-01-05 00:34:05.946348 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:34:05.946359 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:34:05.946369 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:34:05.946380 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:34:05.946391 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:34:05.946401 | orchestrator |
2026-01-05 00:34:05.946412 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-01-05 00:34:05.946423 | orchestrator | Monday 05 January 2026 00:33:59 +0000 (0:00:01.670) 0:05:11.466 ********
2026-01-05 00:34:05.946434 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:34:05.946445 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:34:05.946456 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:34:05.946466 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:34:05.946477 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:34:05.946488 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:34:05.946499 | orchestrator | changed: [testbed-manager]
2026-01-05 00:34:05.946510 | orchestrator |
2026-01-05 00:34:05.946541 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-01-05 00:34:17.792284 | orchestrator | Monday 05 January 2026 00:34:05 +0000 (0:00:06.205) 0:05:17.672 ********
2026-01-05 00:34:17.792425 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:34:17.792445 | orchestrator |
2026-01-05 00:34:17.792459 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-01-05 00:34:17.792470 | orchestrator | Monday 05 January 2026 00:34:06 +0000 (0:00:00.463) 0:05:18.135 ********
2026-01-05 00:34:17.792482 | orchestrator | changed: [testbed-manager]
2026-01-05 00:34:17.792494 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:34:17.792506 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:34:17.792517 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:34:17.792527 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:34:17.792538 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:34:17.792549 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:34:17.792560 | orchestrator |
2026-01-05 00:34:17.792571 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-01-05 00:34:17.792582 | orchestrator | Monday 05 January 2026 00:34:07 +0000 (0:00:00.751) 0:05:18.887 ********
2026-01-05 00:34:17.792593 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:34:17.792606 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:34:17.792617 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:34:17.792628 | orchestrator | ok: [testbed-manager]
2026-01-05 00:34:17.792644 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:34:17.792663 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:34:17.792706 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:34:17.792724 | orchestrator |
2026-01-05 00:34:17.792742 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-01-05 00:34:17.792760 | orchestrator | Monday 05 January 2026 00:34:08 +0000 (0:00:01.682) 0:05:20.569 ********
2026-01-05 00:34:17.792777 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:34:17.792796 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:34:17.792815 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:34:17.792859 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:34:17.792879 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:34:17.792899 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:34:17.792950 | orchestrator | changed: [testbed-manager]
2026-01-05 00:34:17.792965 | orchestrator |
2026-01-05 00:34:17.792978 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-01-05 00:34:17.792991 | orchestrator | Monday 05 January 2026 00:34:09 +0000 (0:00:00.767) 0:05:21.337 ********
2026-01-05 00:34:17.793004 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:34:17.793017 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:34:17.793030 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:34:17.793043 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:34:17.793055 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:34:17.793069 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:34:17.793082 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:34:17.793094 | orchestrator |
2026-01-05 00:34:17.793107 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-01-05 00:34:17.793121 | orchestrator | Monday 05 January 2026 00:34:09 +0000 (0:00:00.345) 0:05:21.683 ********
2026-01-05 00:34:17.793134 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:34:17.793146 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:34:17.793160 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:34:17.793173 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:34:17.793185 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:34:17.793195 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:34:17.793206 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:34:17.793217 | orchestrator |
2026-01-05 00:34:17.793248 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-01-05 00:34:17.793287 | orchestrator | Monday 05 January 2026 00:34:10 +0000 (0:00:00.466) 0:05:22.149 ********
2026-01-05 00:34:17.793298 | orchestrator | ok: [testbed-manager]
2026-01-05 00:34:17.793309 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:34:17.793320 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:34:17.793330 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:34:17.793341 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:34:17.793352 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:34:17.793362 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:34:17.793373 | orchestrator |
2026-01-05 00:34:17.793384 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-01-05 00:34:17.793395 | orchestrator | Monday 05 January 2026 00:34:10 +0000 (0:00:00.292) 0:05:22.441 ********
2026-01-05 00:34:17.793405 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:34:17.793416 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:34:17.793427 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:34:17.793438 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:34:17.793448 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:34:17.793459 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:34:17.793469 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:34:17.793480 | orchestrator |
2026-01-05 00:34:17.793491 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-01-05 00:34:17.793503 | orchestrator | Monday 05 January 2026 00:34:11 +0000 (0:00:00.328) 0:05:22.770 ********
2026-01-05 00:34:17.793513 | orchestrator | ok: [testbed-manager]
2026-01-05 00:34:17.793524 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:34:17.793535 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:34:17.793546 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:34:17.793556 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:34:17.793567 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:34:17.793577 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:34:17.793588 | orchestrator |
2026-01-05 00:34:17.793598 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-01-05 00:34:17.793609 | orchestrator | Monday 05 January 2026 00:34:11 +0000 (0:00:00.319) 0:05:23.118 ********
2026-01-05 00:34:17.793620 | orchestrator | ok: [testbed-manager] =>
2026-01-05 00:34:17.793631 | orchestrator |   docker_version: 5:27.5.1
2026-01-05 00:34:17.793641 | orchestrator | ok: [testbed-node-3] =>
2026-01-05 00:34:17.793652 | orchestrator |   docker_version: 5:27.5.1
2026-01-05 00:34:17.793663 | orchestrator | ok: [testbed-node-4] =>
2026-01-05 00:34:17.793674 | orchestrator |   docker_version: 5:27.5.1
2026-01-05 00:34:17.793685 | orchestrator | ok: [testbed-node-5] =>
2026-01-05 00:34:17.793695 | orchestrator |   docker_version: 5:27.5.1
2026-01-05 00:34:17.793727 | orchestrator | ok: [testbed-node-0] =>
2026-01-05 00:34:17.793739 | orchestrator |   docker_version: 5:27.5.1
2026-01-05 00:34:17.793750 | orchestrator | ok: [testbed-node-1] =>
2026-01-05 00:34:17.793760 | orchestrator |   docker_version: 5:27.5.1
2026-01-05 00:34:17.793771 | orchestrator | ok: [testbed-node-2] =>
2026-01-05 00:34:17.793781 | orchestrator |   docker_version: 5:27.5.1
2026-01-05 00:34:17.793792 | orchestrator |
2026-01-05 00:34:17.793803 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-01-05 00:34:17.793814 | orchestrator | Monday 05 January 2026 00:34:11 +0000 (0:00:00.319) 0:05:23.438 ********
2026-01-05 00:34:17.793867 | orchestrator | ok: [testbed-manager] =>
2026-01-05 00:34:17.793879 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-05 00:34:17.793890 | orchestrator | ok: [testbed-node-3] =>
2026-01-05 00:34:17.793901 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-05 00:34:17.793912 | orchestrator | ok: [testbed-node-4] =>
2026-01-05 00:34:17.793922 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-05 00:34:17.793933 | orchestrator | ok: [testbed-node-5] =>
2026-01-05 00:34:17.793943 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-05 00:34:17.793954 | orchestrator | ok: [testbed-node-0] =>
2026-01-05 00:34:17.793964 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-05 00:34:17.793975 | orchestrator | ok: [testbed-node-1] =>
2026-01-05 00:34:17.793994 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-05 00:34:17.794005 | orchestrator | ok: [testbed-node-2] =>
2026-01-05 00:34:17.794086 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-05 00:34:17.794101 | orchestrator |
2026-01-05 00:34:17.794112 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-01-05 00:34:17.794122 | orchestrator | Monday 05 January 2026 00:34:12 +0000 (0:00:00.318) 0:05:23.757 ********
2026-01-05 00:34:17.794133 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:34:17.794144 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:34:17.794154 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:34:17.794165 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:34:17.794176 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:34:17.794186 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:34:17.794197 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:34:17.794207 | orchestrator |
2026-01-05 00:34:17.794218 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-01-05 00:34:17.794229 | orchestrator | Monday 05 January 2026 00:34:12 +0000 (0:00:00.299) 0:05:24.056 ********
2026-01-05 00:34:17.794240 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:34:17.794250 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:34:17.794261 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:34:17.794271 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:34:17.794282 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:34:17.794293 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:34:17.794303 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:34:17.794314 | orchestrator |
2026-01-05 00:34:17.794325 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-01-05 00:34:17.794335 | orchestrator | Monday 05 January 2026 00:34:12 +0000 (0:00:00.326) 0:05:24.382 ********
2026-01-05 00:34:17.794348 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:34:17.794361 | orchestrator |
2026-01-05 00:34:17.794372 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-01-05 00:34:17.794383 | orchestrator | Monday 05 January 2026 00:34:13 +0000 (0:00:00.445) 0:05:24.828 ********
2026-01-05 00:34:17.794394 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:34:17.794404 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:34:17.794422 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:34:17.794433 | orchestrator | ok: [testbed-manager]
2026-01-05 00:34:17.794444 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:34:17.794455 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:34:17.794465 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:34:17.794476 | orchestrator |
2026-01-05 00:34:17.794487 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-01-05 00:34:17.794498 | orchestrator | Monday 05 January 2026 00:34:14 +0000 (0:00:01.055) 0:05:25.884 ********
2026-01-05 00:34:17.794509 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:34:17.794519 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:34:17.794530 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:34:17.794540 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:34:17.794551 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:34:17.794562 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:34:17.794572 | orchestrator | ok: [testbed-manager]
2026-01-05 00:34:17.794583 | orchestrator |
2026-01-05 00:34:17.794594 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-01-05 00:34:17.794606 | orchestrator | Monday 05 January 2026 00:34:17 +0000 (0:00:03.169) 0:05:29.053 ********
2026-01-05 00:34:17.794617 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-01-05 00:34:17.794629 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-01-05 00:34:17.794640 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-01-05 00:34:17.794658 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-01-05 00:34:17.794669 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-01-05 00:34:17.794680 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-01-05 00:34:17.794691 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:34:17.794701 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-01-05 00:34:17.794712 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-01-05 00:34:17.794723 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-01-05 00:34:17.794733 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:34:17.794744 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-01-05 00:34:17.794755 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-01-05 00:34:17.794766 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:34:17.794777 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-01-05 00:34:17.794788 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-01-05 00:34:17.794808 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-01-05 00:35:20.824052 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-01-05 00:35:20.824193 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:35:20.824211 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-01-05 00:35:20.824225 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-01-05 00:35:20.824245 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-01-05 00:35:20.824262 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:35:20.824282 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:35:20.824300 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-01-05 00:35:20.824312 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-01-05 00:35:20.824323 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-01-05 00:35:20.824335 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:35:20.824346 | orchestrator |
2026-01-05 00:35:20.824359 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-01-05 00:35:20.824372 | orchestrator | Monday 05 January 2026 00:34:18 +0000 (0:00:00.708) 0:05:29.762 ********
2026-01-05 00:35:20.824383 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:20.824395 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:35:20.824406 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:35:20.824417 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:35:20.824428 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:35:20.824438 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:35:20.824449 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:35:20.824460 | orchestrator |
2026-01-05 00:35:20.824471 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-01-05 00:35:20.824483 | orchestrator | Monday 05 January 2026 00:34:24 +0000 (0:00:06.700) 0:05:36.463 ********
2026-01-05 00:35:20.824494 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:35:20.824505 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:35:20.824516 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:20.824531 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:35:20.824551 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:35:20.824571 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:35:20.824591 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:35:20.824609 | orchestrator |
2026-01-05 00:35:20.824623 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-01-05 00:35:20.824636 | orchestrator | Monday 05 January 2026 00:34:25 +0000 (0:00:01.156) 0:05:37.620 ********
2026-01-05 00:35:20.824650 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:20.824663 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:35:20.824676 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:35:20.824688 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:35:20.824702 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:35:20.824742 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:35:20.824755 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:35:20.824767 | orchestrator |
2026-01-05 00:35:20.824780 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-01-05 00:35:20.824794 | orchestrator | Monday 05 January 2026 00:34:34 +0000 (0:00:08.642) 0:05:46.262 ********
2026-01-05 00:35:20.824836 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:35:20.824850 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:35:20.824862 | orchestrator | changed: [testbed-manager]
2026-01-05 00:35:20.824876 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:35:20.824889 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:35:20.824901 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:35:20.824912 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:35:20.824922 | orchestrator |
2026-01-05 00:35:20.824933 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-01-05 00:35:20.824944 | orchestrator | Monday 05 January 2026 00:34:37 +0000 (0:00:03.376) 0:05:49.639 ********
2026-01-05 00:35:20.824971 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:20.824982 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:35:20.824993 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:35:20.825004 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:35:20.825014 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:35:20.825025 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:35:20.825036 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:35:20.825046 | orchestrator |
2026-01-05 00:35:20.825057 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-01-05 00:35:20.825068 | orchestrator | Monday 05 January 2026 00:34:39 +0000 (0:00:01.393) 0:05:51.032 ********
2026-01-05 00:35:20.825079 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:20.825090 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:35:20.825101 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:35:20.825111 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:35:20.825122 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:35:20.825132 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:35:20.825143 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:35:20.825154 | orchestrator |
2026-01-05 00:35:20.825165 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-01-05 00:35:20.825175 | orchestrator | Monday 05 January 2026 00:34:40 +0000 (0:00:01.678) 0:05:52.711 ********
2026-01-05 00:35:20.825186 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:35:20.825197 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:35:20.825207 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:35:20.825218 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:35:20.825229 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:35:20.825240 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:35:20.825251 | orchestrator | changed: [testbed-manager]
2026-01-05 00:35:20.825261 | orchestrator |
2026-01-05 00:35:20.825272 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-01-05 00:35:20.825283 | orchestrator | Monday 05 January 2026 00:34:41 +0000 (0:00:00.679) 0:05:53.391 ********
2026-01-05 00:35:20.825294 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:20.825305 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:35:20.825316 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:35:20.825326 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:35:20.825337 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:35:20.825354 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:35:20.825372 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:35:20.825388 | orchestrator |
2026-01-05 00:35:20.825405 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-01-05 00:35:20.825446 | orchestrator | Monday 05 January 2026 00:34:51 +0000 (0:00:09.727) 0:06:03.119 ********
2026-01-05 00:35:20.825468 | orchestrator | changed: [testbed-manager]
2026-01-05 00:35:20.825487 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:35:20.825503 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:35:20.825525 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:35:20.825536 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:35:20.825547 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:35:20.825557 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:35:20.825568 | orchestrator |
2026-01-05 00:35:20.825579 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-01-05 00:35:20.825590 | orchestrator | Monday 05 January 2026 00:34:52 +0000 (0:00:00.998) 0:06:04.118 ********
2026-01-05 00:35:20.825601 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:20.825612 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:35:20.825622 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:35:20.825633 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:35:20.825643 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:35:20.825654 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:35:20.825665 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:35:20.825675 | orchestrator |
2026-01-05 00:35:20.825686 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-01-05 00:35:20.825697 | orchestrator | Monday 05 January 2026 00:35:02 +0000 (0:00:09.851) 0:06:13.969 ********
2026-01-05 00:35:20.825708 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:20.825719 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:35:20.825729 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:35:20.825740 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:35:20.825751 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:35:20.825761 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:35:20.825772 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:35:20.825783 | orchestrator |
2026-01-05 00:35:20.825793 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-01-05 00:35:20.825832 | orchestrator | Monday 05 January 2026 00:35:13 +0000 (0:00:11.503) 0:06:25.473 ********
2026-01-05 00:35:20.825844 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-01-05 00:35:20.825855 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-01-05 00:35:20.825866 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-01-05 00:35:20.825877 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-01-05 00:35:20.825888 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-01-05 00:35:20.825899 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-01-05 00:35:20.825910 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-01-05 00:35:20.825921 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-01-05 00:35:20.825932 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-01-05 00:35:20.825943 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-01-05 00:35:20.825954 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-01-05 00:35:20.825965 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-01-05 00:35:20.825976 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-01-05 00:35:20.825986 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-01-05 00:35:20.825997 | orchestrator |
2026-01-05 00:35:20.826008 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-01-05 00:35:20.826078 | orchestrator | Monday 05 January 2026 00:35:15 +0000 (0:00:01.356) 0:06:26.829 ********
2026-01-05 00:35:20.826090 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:35:20.826101 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:35:20.826112 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:35:20.826122 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:35:20.826133 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:35:20.826144 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:35:20.826155 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:35:20.826165 | orchestrator |
2026-01-05 00:35:20.826184 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-01-05 00:35:20.826199 | orchestrator | Monday 05 January 2026 00:35:15 +0000 (0:00:00.576) 0:06:27.405 ********
2026-01-05 00:35:20.826219 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:20.826231 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:35:20.826242 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:35:20.826252 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:35:20.826263 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:35:20.826274 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:35:20.826284 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:35:20.826295 | orchestrator |
2026-01-05 00:35:20.826306 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-01-05 00:35:20.826319 | orchestrator | Monday 05 January 2026 00:35:19 +0000 (0:00:03.904) 0:06:31.310 ********
2026-01-05 00:35:20.826330 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:35:20.826341 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:35:20.826352 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:35:20.826362 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:35:20.826373 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:35:20.826384 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:35:20.826395 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:35:20.826405 | orchestrator |
2026-01-05 00:35:20.826417 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-01-05 00:35:20.826429 | orchestrator | Monday 05 January 2026 00:35:20 +0000 (0:00:00.704) 0:06:32.014 ********
2026-01-05 00:35:20.826440 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-01-05 00:35:20.826451 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-01-05 00:35:20.826462 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:35:20.826473 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-01-05 00:35:20.826484 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-01-05 00:35:20.826495 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:35:20.826505 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-01-05 00:35:20.826516 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-01-05 00:35:20.826527 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:35:20.826547 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-01-05 00:35:41.529574 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-01-05 00:35:41.529714 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:35:41.529729 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-01-05 00:35:41.529741 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-01-05 00:35:41.529753 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:35:41.529764 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-01-05 00:35:41.529775 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-01-05 00:35:41.529853 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:35:41.529866 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-01-05 00:35:41.529877 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-01-05 00:35:41.529888 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:35:41.529900 | orchestrator |
2026-01-05 00:35:41.529913 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-01-05 00:35:41.529925 | orchestrator | Monday 05 January 2026 00:35:21 +0000 (0:00:00.851) 0:06:32.865 ********
2026-01-05 00:35:41.529936 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:35:41.529947 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:35:41.529958 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:35:41.529969 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:35:41.529979 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:35:41.529990 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:35:41.530001 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:35:41.530012 | orchestrator |
2026-01-05 00:35:41.530125 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-01-05 00:35:41.530183 | orchestrator | Monday 05 January 2026 00:35:21 +0000 (0:00:00.563) 0:06:33.428 ********
2026-01-05 00:35:41.530204 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:35:41.530225 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:35:41.530245 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:35:41.530264 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:35:41.530280 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:35:41.530292 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:35:41.530305 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:35:41.530319 | orchestrator |
2026-01-05 00:35:41.530332 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-01-05 00:35:41.530344 | orchestrator | Monday 05 January 2026 00:35:22 +0000 (0:00:00.566) 0:06:33.994 ********
2026-01-05 00:35:41.530357 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:35:41.530370 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:35:41.530382 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:35:41.530446 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:35:41.530460 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:35:41.530472 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:35:41.530483 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:35:41.530494 | orchestrator |
2026-01-05 00:35:41.530505 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-01-05 00:35:41.530516 | orchestrator | Monday 05 January 2026 00:35:22 +0000 (0:00:00.602) 0:06:34.597 ********
2026-01-05 00:35:41.530527 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:41.530538 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:35:41.530549 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:35:41.530560 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:35:41.530570 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:35:41.530581 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:35:41.530591 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:35:41.530602 | orchestrator |
2026-01-05 00:35:41.530613 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-01-05 00:35:41.530629 | orchestrator | Monday 05 January 2026 00:35:24 +0000 (0:00:02.071) 0:06:36.668 ********
2026-01-05 00:35:41.530642 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:35:41.530656 | orchestrator |
2026-01-05 00:35:41.530667 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-01-05 00:35:41.530678 | orchestrator | Monday 05 January 2026 00:35:25 +0000 (0:00:00.956) 0:06:37.625 ********
2026-01-05 00:35:41.530689 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:41.530700 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:35:41.530710 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:35:41.530721 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:35:41.530732 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:35:41.530742 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:35:41.530753 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:35:41.530764 | orchestrator |
2026-01-05 00:35:41.530775 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-01-05 00:35:41.530829 | orchestrator | Monday 05 January 2026 00:35:26 +0000 (0:00:01.031) 0:06:38.656 ********
2026-01-05 00:35:41.530843 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:41.530853 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:35:41.530864 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:35:41.530874 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:35:41.530885 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:35:41.530895 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:35:41.530906 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:35:41.530916 | orchestrator |
2026-01-05 00:35:41.530927 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-01-05 00:35:41.530961 | orchestrator | Monday 05 January 2026 00:35:27 +0000 (0:00:00.855) 0:06:39.512 ********
2026-01-05 00:35:41.530972 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:41.530983 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:35:41.530993 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:35:41.531003 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:35:41.531014 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:35:41.531024 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:35:41.531035 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:35:41.531045 | orchestrator |
2026-01-05 00:35:41.531056 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay
file is changed] *** 2026-01-05 00:35:41.531089 | orchestrator | Monday 05 January 2026 00:35:29 +0000 (0:00:01.592) 0:06:41.105 ******** 2026-01-05 00:35:41.531101 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:35:41.531111 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:35:41.531122 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:35:41.531132 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:35:41.531143 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:35:41.531153 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:35:41.531164 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:35:41.531176 | orchestrator | 2026-01-05 00:35:41.531195 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-01-05 00:35:41.531215 | orchestrator | Monday 05 January 2026 00:35:30 +0000 (0:00:01.413) 0:06:42.518 ******** 2026-01-05 00:35:41.531236 | orchestrator | ok: [testbed-manager] 2026-01-05 00:35:41.531256 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:35:41.531276 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:35:41.531296 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:35:41.531311 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:35:41.531322 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:35:41.531332 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:35:41.531343 | orchestrator | 2026-01-05 00:35:41.531354 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-01-05 00:35:41.531365 | orchestrator | Monday 05 January 2026 00:35:32 +0000 (0:00:01.362) 0:06:43.881 ******** 2026-01-05 00:35:41.531376 | orchestrator | changed: [testbed-manager] 2026-01-05 00:35:41.531386 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:35:41.531397 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:35:41.531408 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:35:41.531418 | orchestrator | changed: 
[testbed-node-0] 2026-01-05 00:35:41.531429 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:35:41.531440 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:35:41.531450 | orchestrator | 2026-01-05 00:35:41.531461 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-01-05 00:35:41.531472 | orchestrator | Monday 05 January 2026 00:35:33 +0000 (0:00:01.662) 0:06:45.543 ******** 2026-01-05 00:35:41.531483 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:35:41.531495 | orchestrator | 2026-01-05 00:35:41.531505 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-01-05 00:35:41.531516 | orchestrator | Monday 05 January 2026 00:35:35 +0000 (0:00:01.216) 0:06:46.760 ******** 2026-01-05 00:35:41.531527 | orchestrator | ok: [testbed-manager] 2026-01-05 00:35:41.531538 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:35:41.531549 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:35:41.531559 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:35:41.531570 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:35:41.531581 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:35:41.531591 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:35:41.531602 | orchestrator | 2026-01-05 00:35:41.531613 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-01-05 00:35:41.531624 | orchestrator | Monday 05 January 2026 00:35:36 +0000 (0:00:01.392) 0:06:48.153 ******** 2026-01-05 00:35:41.531649 | orchestrator | ok: [testbed-manager] 2026-01-05 00:35:41.531660 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:35:41.531671 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:35:41.531681 | orchestrator | ok: [testbed-node-5] 
2026-01-05 00:35:41.531692 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:35:41.531702 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:35:41.531713 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:35:41.531724 | orchestrator | 2026-01-05 00:35:41.531735 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-01-05 00:35:41.531746 | orchestrator | Monday 05 January 2026 00:35:37 +0000 (0:00:01.206) 0:06:49.360 ******** 2026-01-05 00:35:41.531757 | orchestrator | ok: [testbed-manager] 2026-01-05 00:35:41.531767 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:35:41.531778 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:35:41.531818 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:35:41.531830 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:35:41.531840 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:35:41.531851 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:35:41.531862 | orchestrator | 2026-01-05 00:35:41.531873 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-01-05 00:35:41.531884 | orchestrator | Monday 05 January 2026 00:35:38 +0000 (0:00:01.193) 0:06:50.554 ******** 2026-01-05 00:35:41.531895 | orchestrator | ok: [testbed-manager] 2026-01-05 00:35:41.531905 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:35:41.531916 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:35:41.531926 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:35:41.531937 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:35:41.531948 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:35:41.531958 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:35:41.531969 | orchestrator | 2026-01-05 00:35:41.531979 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-01-05 00:35:41.531990 | orchestrator | Monday 05 January 2026 00:35:40 +0000 (0:00:01.430) 0:06:51.984 ******** 2026-01-05 00:35:41.532001 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:35:41.532012 | orchestrator | 2026-01-05 00:35:41.532023 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-05 00:35:41.532033 | orchestrator | Monday 05 January 2026 00:35:41 +0000 (0:00:00.968) 0:06:52.953 ******** 2026-01-05 00:35:41.532044 | orchestrator | 2026-01-05 00:35:41.532055 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-05 00:35:41.532066 | orchestrator | Monday 05 January 2026 00:35:41 +0000 (0:00:00.041) 0:06:52.995 ******** 2026-01-05 00:35:41.532076 | orchestrator | 2026-01-05 00:35:41.532087 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-05 00:35:41.532098 | orchestrator | Monday 05 January 2026 00:35:41 +0000 (0:00:00.040) 0:06:53.035 ******** 2026-01-05 00:35:41.532108 | orchestrator | 2026-01-05 00:35:41.532119 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-05 00:35:41.532138 | orchestrator | Monday 05 January 2026 00:35:41 +0000 (0:00:00.047) 0:06:53.082 ******** 2026-01-05 00:36:08.460607 | orchestrator | 2026-01-05 00:36:08.460741 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-05 00:36:08.460810 | orchestrator | Monday 05 January 2026 00:35:41 +0000 (0:00:00.039) 0:06:53.122 ******** 2026-01-05 00:36:08.460824 | orchestrator | 2026-01-05 00:36:08.460835 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-05 00:36:08.460847 | orchestrator | Monday 05 January 2026 00:35:41 +0000 (0:00:00.039) 0:06:53.162 ******** 2026-01-05 00:36:08.460858 | orchestrator | 
2026-01-05 00:36:08.460869 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-05 00:36:08.460880 | orchestrator | Monday 05 January 2026 00:35:41 +0000 (0:00:00.048) 0:06:53.210 ******** 2026-01-05 00:36:08.460919 | orchestrator | 2026-01-05 00:36:08.460931 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-05 00:36:08.460942 | orchestrator | Monday 05 January 2026 00:35:41 +0000 (0:00:00.040) 0:06:53.250 ******** 2026-01-05 00:36:08.460953 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:36:08.460965 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:36:08.460976 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:36:08.460987 | orchestrator | 2026-01-05 00:36:08.460998 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-01-05 00:36:08.461009 | orchestrator | Monday 05 January 2026 00:35:42 +0000 (0:00:01.140) 0:06:54.391 ******** 2026-01-05 00:36:08.461021 | orchestrator | changed: [testbed-manager] 2026-01-05 00:36:08.461032 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:36:08.461043 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:36:08.461054 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:36:08.461065 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:36:08.461075 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:36:08.461086 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:36:08.461097 | orchestrator | 2026-01-05 00:36:08.461108 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-01-05 00:36:08.461122 | orchestrator | Monday 05 January 2026 00:35:44 +0000 (0:00:01.615) 0:06:56.006 ******** 2026-01-05 00:36:08.461135 | orchestrator | changed: [testbed-manager] 2026-01-05 00:36:08.461148 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:36:08.461161 | orchestrator | changed: [testbed-node-4] 
2026-01-05 00:36:08.461175 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:36:08.461187 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:36:08.461202 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:36:08.461221 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:36:08.461241 | orchestrator | 2026-01-05 00:36:08.461259 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-01-05 00:36:08.461279 | orchestrator | Monday 05 January 2026 00:35:45 +0000 (0:00:01.256) 0:06:57.263 ******** 2026-01-05 00:36:08.461296 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:36:08.461314 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:36:08.461332 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:36:08.461352 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:36:08.461387 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:36:08.461407 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:36:08.461426 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:36:08.461446 | orchestrator | 2026-01-05 00:36:08.461464 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-01-05 00:36:08.461482 | orchestrator | Monday 05 January 2026 00:35:47 +0000 (0:00:02.302) 0:06:59.566 ******** 2026-01-05 00:36:08.461501 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:36:08.461520 | orchestrator | 2026-01-05 00:36:08.461539 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-01-05 00:36:08.461557 | orchestrator | Monday 05 January 2026 00:35:47 +0000 (0:00:00.128) 0:06:59.694 ******** 2026-01-05 00:36:08.461590 | orchestrator | ok: [testbed-manager] 2026-01-05 00:36:08.461602 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:36:08.461613 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:36:08.461623 | orchestrator | changed: [testbed-node-5] 2026-01-05 
00:36:08.461634 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:36:08.461645 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:36:08.461656 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:36:08.461667 | orchestrator | 2026-01-05 00:36:08.461678 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-01-05 00:36:08.461690 | orchestrator | Monday 05 January 2026 00:35:48 +0000 (0:00:01.026) 0:07:00.721 ******** 2026-01-05 00:36:08.461701 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:36:08.461712 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:36:08.461723 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:36:08.461745 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:36:08.461778 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:36:08.461790 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:36:08.461801 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:36:08.461811 | orchestrator | 2026-01-05 00:36:08.461822 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-01-05 00:36:08.461833 | orchestrator | Monday 05 January 2026 00:35:49 +0000 (0:00:00.545) 0:07:01.267 ******** 2026-01-05 00:36:08.461845 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:36:08.461859 | orchestrator | 2026-01-05 00:36:08.461870 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-01-05 00:36:08.461881 | orchestrator | Monday 05 January 2026 00:35:50 +0000 (0:00:01.150) 0:07:02.418 ******** 2026-01-05 00:36:08.461892 | orchestrator | ok: [testbed-manager] 2026-01-05 00:36:08.461903 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:36:08.461914 | orchestrator 
| ok: [testbed-node-4] 2026-01-05 00:36:08.461924 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:36:08.461935 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:36:08.461946 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:36:08.461956 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:36:08.461967 | orchestrator | 2026-01-05 00:36:08.461978 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-01-05 00:36:08.461989 | orchestrator | Monday 05 January 2026 00:35:51 +0000 (0:00:00.879) 0:07:03.297 ******** 2026-01-05 00:36:08.462000 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-01-05 00:36:08.462096 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-01-05 00:36:08.462110 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-01-05 00:36:08.462121 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-01-05 00:36:08.462132 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-01-05 00:36:08.462143 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-01-05 00:36:08.462154 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-01-05 00:36:08.462165 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-01-05 00:36:08.462176 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-01-05 00:36:08.462187 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-01-05 00:36:08.462198 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-01-05 00:36:08.462209 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-01-05 00:36:08.462220 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-01-05 00:36:08.462231 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-01-05 00:36:08.462242 | orchestrator | 2026-01-05 00:36:08.462253 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-01-05 00:36:08.462264 | orchestrator | Monday 05 January 2026 00:35:54 +0000 (0:00:02.565) 0:07:05.863 ******** 2026-01-05 00:36:08.462275 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:36:08.462286 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:36:08.462296 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:36:08.462307 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:36:08.462318 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:36:08.462329 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:36:08.462340 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:36:08.462351 | orchestrator | 2026-01-05 00:36:08.462362 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-01-05 00:36:08.462373 | orchestrator | Monday 05 January 2026 00:35:55 +0000 (0:00:00.945) 0:07:06.808 ******** 2026-01-05 00:36:08.462386 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:36:08.462407 | orchestrator | 2026-01-05 00:36:08.462419 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-01-05 00:36:08.462430 | orchestrator | Monday 05 January 2026 00:35:56 +0000 (0:00:01.003) 0:07:07.812 ******** 2026-01-05 00:36:08.462441 | orchestrator | ok: [testbed-manager] 2026-01-05 00:36:08.462452 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:36:08.462463 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:36:08.462474 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:36:08.462485 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:36:08.462496 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:36:08.462507 | orchestrator | ok: 
[testbed-node-2] 2026-01-05 00:36:08.462517 | orchestrator | 2026-01-05 00:36:08.462528 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-01-05 00:36:08.462540 | orchestrator | Monday 05 January 2026 00:35:57 +0000 (0:00:00.961) 0:07:08.774 ******** 2026-01-05 00:36:08.462551 | orchestrator | ok: [testbed-manager] 2026-01-05 00:36:08.462562 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:36:08.462572 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:36:08.462583 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:36:08.462594 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:36:08.462605 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:36:08.462623 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:36:08.462634 | orchestrator | 2026-01-05 00:36:08.462645 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-01-05 00:36:08.462656 | orchestrator | Monday 05 January 2026 00:35:58 +0000 (0:00:01.158) 0:07:09.932 ******** 2026-01-05 00:36:08.462667 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:36:08.462678 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:36:08.462689 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:36:08.462700 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:36:08.462711 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:36:08.462722 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:36:08.462732 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:36:08.462743 | orchestrator | 2026-01-05 00:36:08.462775 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-01-05 00:36:08.462786 | orchestrator | Monday 05 January 2026 00:35:58 +0000 (0:00:00.546) 0:07:10.478 ******** 2026-01-05 00:36:08.462797 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:36:08.462808 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:36:08.462819 | 
orchestrator | ok: [testbed-manager] 2026-01-05 00:36:08.462830 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:36:08.462841 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:36:08.462852 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:36:08.462863 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:36:08.462873 | orchestrator | 2026-01-05 00:36:08.462884 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-01-05 00:36:08.462895 | orchestrator | Monday 05 January 2026 00:36:00 +0000 (0:00:01.575) 0:07:12.054 ******** 2026-01-05 00:36:08.462906 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:36:08.462917 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:36:08.462928 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:36:08.462939 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:36:08.462950 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:36:08.462961 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:36:08.462972 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:36:08.462983 | orchestrator | 2026-01-05 00:36:08.462994 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-01-05 00:36:08.463004 | orchestrator | Monday 05 January 2026 00:36:00 +0000 (0:00:00.551) 0:07:12.605 ******** 2026-01-05 00:36:08.463015 | orchestrator | ok: [testbed-manager] 2026-01-05 00:36:08.463026 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:36:08.463037 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:36:08.463048 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:36:08.463066 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:36:08.463077 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:36:08.463095 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:36:43.014840 | orchestrator | 2026-01-05 00:36:43.014981 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] *********** 2026-01-05 00:36:43.014999 | orchestrator | Monday 05 January 2026 00:36:08 +0000 (0:00:07.586) 0:07:20.192 ******** 2026-01-05 00:36:43.015012 | orchestrator | ok: [testbed-manager] 2026-01-05 00:36:43.015025 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:36:43.015036 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:36:43.015047 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:36:43.015058 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:36:43.015069 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:36:43.015080 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:36:43.015091 | orchestrator | 2026-01-05 00:36:43.015103 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-01-05 00:36:43.015114 | orchestrator | Monday 05 January 2026 00:36:10 +0000 (0:00:01.758) 0:07:21.950 ******** 2026-01-05 00:36:43.015125 | orchestrator | ok: [testbed-manager] 2026-01-05 00:36:43.015136 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:36:43.015147 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:36:43.015157 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:36:43.015168 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:36:43.015178 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:36:43.015189 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:36:43.015200 | orchestrator | 2026-01-05 00:36:43.015211 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-01-05 00:36:43.015223 | orchestrator | Monday 05 January 2026 00:36:11 +0000 (0:00:01.762) 0:07:23.713 ******** 2026-01-05 00:36:43.015234 | orchestrator | ok: [testbed-manager] 2026-01-05 00:36:43.015245 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:36:43.015255 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:36:43.015266 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:36:43.015276 | 
orchestrator | changed: [testbed-node-1] 2026-01-05 00:36:43.015287 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:36:43.015298 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:36:43.015308 | orchestrator | 2026-01-05 00:36:43.015319 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-05 00:36:43.015330 | orchestrator | Monday 05 January 2026 00:36:13 +0000 (0:00:01.818) 0:07:25.531 ******** 2026-01-05 00:36:43.015341 | orchestrator | ok: [testbed-manager] 2026-01-05 00:36:43.015352 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:36:43.015362 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:36:43.015373 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:36:43.015384 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:36:43.015394 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:36:43.015405 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:36:43.015416 | orchestrator | 2026-01-05 00:36:43.015426 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-05 00:36:43.015437 | orchestrator | Monday 05 January 2026 00:36:14 +0000 (0:00:00.926) 0:07:26.458 ******** 2026-01-05 00:36:43.015448 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:36:43.015459 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:36:43.015469 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:36:43.015480 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:36:43.015491 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:36:43.015501 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:36:43.015512 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:36:43.015523 | orchestrator | 2026-01-05 00:36:43.015533 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-01-05 00:36:43.015544 | orchestrator | Monday 05 January 2026 00:36:15 +0000 (0:00:01.179) 0:07:27.637 ******** 
2026-01-05 00:36:43.015556 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:36:43.015566 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:36:43.015606 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:36:43.015617 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:36:43.015645 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:36:43.015656 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:36:43.015667 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:36:43.015677 | orchestrator | 2026-01-05 00:36:43.015688 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-01-05 00:36:43.015699 | orchestrator | Monday 05 January 2026 00:36:16 +0000 (0:00:00.595) 0:07:28.233 ******** 2026-01-05 00:36:43.015709 | orchestrator | ok: [testbed-manager] 2026-01-05 00:36:43.015744 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:36:43.015756 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:36:43.015767 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:36:43.015777 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:36:43.015788 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:36:43.015799 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:36:43.015809 | orchestrator | 2026-01-05 00:36:43.015820 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2026-01-05 00:36:43.015831 | orchestrator | Monday 05 January 2026 00:36:17 +0000 (0:00:00.596) 0:07:28.829 ******** 2026-01-05 00:36:43.015842 | orchestrator | ok: [testbed-manager] 2026-01-05 00:36:43.015853 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:36:43.015863 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:36:43.015874 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:36:43.015885 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:36:43.015896 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:36:43.015906 | orchestrator | ok: [testbed-node-2] 2026-01-05 
00:36:43.015917 | orchestrator |
2026-01-05 00:36:43.015928 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-01-05 00:36:43.015939 | orchestrator | Monday 05 January 2026 00:36:17 +0000 (0:00:00.577) 0:07:29.406 ********
2026-01-05 00:36:43.015950 | orchestrator | ok: [testbed-manager]
2026-01-05 00:36:43.015961 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:36:43.015971 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:36:43.015982 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:36:43.015993 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:36:43.016003 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:36:43.016014 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:36:43.016025 | orchestrator |
2026-01-05 00:36:43.016036 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-01-05 00:36:43.016047 | orchestrator | Monday 05 January 2026 00:36:18 +0000 (0:00:00.786) 0:07:30.193 ********
2026-01-05 00:36:43.016058 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:36:43.016068 | orchestrator | ok: [testbed-manager]
2026-01-05 00:36:43.016079 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:36:43.016090 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:36:43.016101 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:36:43.016111 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:36:43.016122 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:36:43.016133 | orchestrator |
2026-01-05 00:36:43.016163 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-01-05 00:36:43.016175 | orchestrator | Monday 05 January 2026 00:36:24 +0000 (0:00:05.715) 0:07:35.908 ********
2026-01-05 00:36:43.016186 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:36:43.016197 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:36:43.016207 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:36:43.016218 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:36:43.016229 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:36:43.016240 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:36:43.016251 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:36:43.016261 | orchestrator |
2026-01-05 00:36:43.016272 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-01-05 00:36:43.016283 | orchestrator | Monday 05 January 2026 00:36:24 +0000 (0:00:00.571) 0:07:36.479 ********
2026-01-05 00:36:43.016295 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:36:43.016322 | orchestrator |
2026-01-05 00:36:43.016333 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-01-05 00:36:43.016344 | orchestrator | Monday 05 January 2026 00:36:25 +0000 (0:00:01.148) 0:07:37.628 ********
2026-01-05 00:36:43.016354 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:36:43.016365 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:36:43.016376 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:36:43.016386 | orchestrator | ok: [testbed-manager]
2026-01-05 00:36:43.016397 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:36:43.016407 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:36:43.016418 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:36:43.016428 | orchestrator |
2026-01-05 00:36:43.016439 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-01-05 00:36:43.016450 | orchestrator | Monday 05 January 2026 00:36:27 +0000 (0:00:01.970) 0:07:39.599 ********
2026-01-05 00:36:43.016460 | orchestrator | ok: [testbed-manager]
2026-01-05 00:36:43.016471 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:36:43.016482 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:36:43.016492 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:36:43.016503 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:36:43.016513 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:36:43.016524 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:36:43.016534 | orchestrator |
2026-01-05 00:36:43.016545 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-01-05 00:36:43.016556 | orchestrator | Monday 05 January 2026 00:36:28 +0000 (0:00:01.148) 0:07:40.748 ********
2026-01-05 00:36:43.016566 | orchestrator | ok: [testbed-manager]
2026-01-05 00:36:43.016577 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:36:43.016588 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:36:43.016598 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:36:43.016609 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:36:43.016620 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:36:43.016630 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:36:43.016641 | orchestrator |
2026-01-05 00:36:43.016652 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-01-05 00:36:43.016663 | orchestrator | Monday 05 January 2026 00:36:29 +0000 (0:00:00.872) 0:07:41.620 ********
2026-01-05 00:36:43.016674 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-05 00:36:43.016686 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-05 00:36:43.016698 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-05 00:36:43.016753 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-05 00:36:43.016768 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-05 00:36:43.016779 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-05 00:36:43.016790 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-05 00:36:43.016801 | orchestrator |
2026-01-05 00:36:43.016811 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-01-05 00:36:43.016822 | orchestrator | Monday 05 January 2026 00:36:31 +0000 (0:00:02.127) 0:07:43.748 ********
2026-01-05 00:36:43.016833 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:36:43.016852 | orchestrator |
2026-01-05 00:36:43.016863 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-01-05 00:36:43.016874 | orchestrator | Monday 05 January 2026 00:36:32 +0000 (0:00:00.903) 0:07:44.651 ********
2026-01-05 00:36:43.016884 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:36:43.016895 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:36:43.016906 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:36:43.016917 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:36:43.016927 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:36:43.016938 | orchestrator | changed: [testbed-manager]
2026-01-05 00:36:43.016949 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:36:43.016959 | orchestrator |
2026-01-05 00:36:43.016978 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-01-05 00:37:16.050918 | orchestrator | Monday 05 January 2026 00:36:43 +0000 (0:00:10.097) 0:07:54.749 ********
2026-01-05 00:37:16.051030 | orchestrator | ok: [testbed-manager]
2026-01-05 00:37:16.051047 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:37:16.051059 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:37:16.051070 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:37:16.051082 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:37:16.051093 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:37:16.051110 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:37:16.051130 | orchestrator |
2026-01-05 00:37:16.051150 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-01-05 00:37:16.051169 | orchestrator | Monday 05 January 2026 00:36:45 +0000 (0:00:02.133) 0:07:56.882 ********
2026-01-05 00:37:16.051187 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:37:16.051206 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:37:16.051224 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:37:16.051242 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:37:16.051260 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:37:16.051280 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:37:16.051297 | orchestrator |
2026-01-05 00:37:16.051318 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-01-05 00:37:16.051337 | orchestrator | Monday 05 January 2026 00:36:46 +0000 (0:00:01.396) 0:07:58.279 ********
2026-01-05 00:37:16.051357 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:37:16.051378 | orchestrator | changed: [testbed-manager]
2026-01-05 00:37:16.051397 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:37:16.051408 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:37:16.051420 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:37:16.051431 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:37:16.051445 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:37:16.051458 | orchestrator |
2026-01-05 00:37:16.051471 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-01-05 00:37:16.051484 | orchestrator |
2026-01-05 00:37:16.051498 | orchestrator | TASK [Include hardening role] **************************************************
2026-01-05 00:37:16.051510 | orchestrator | Monday 05 January 2026 00:36:47 +0000 (0:00:01.314) 0:07:59.593 ********
2026-01-05 00:37:16.051524 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:37:16.051538 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:37:16.051551 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:37:16.051564 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:37:16.051577 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:37:16.051590 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:37:16.051604 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:37:16.051617 | orchestrator |
2026-01-05 00:37:16.051631 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-01-05 00:37:16.051646 | orchestrator |
2026-01-05 00:37:16.051666 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-01-05 00:37:16.051680 | orchestrator | Monday 05 January 2026 00:36:48 +0000 (0:00:00.833) 0:08:00.427 ********
2026-01-05 00:37:16.051773 | orchestrator | changed: [testbed-manager]
2026-01-05 00:37:16.051789 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:37:16.051802 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:37:16.051813 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:37:16.051823 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:37:16.051834 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:37:16.051845 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:37:16.051856 | orchestrator |
2026-01-05 00:37:16.051867 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-01-05 00:37:16.051879 | orchestrator | Monday 05 January 2026 00:36:50 +0000 (0:00:01.425) 0:08:01.852 ********
2026-01-05 00:37:16.051890 | orchestrator | ok: [testbed-manager]
2026-01-05 00:37:16.051900 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:37:16.051911 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:37:16.051922 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:37:16.051947 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:37:16.051958 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:37:16.051969 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:37:16.051980 | orchestrator |
2026-01-05 00:37:16.051991 | orchestrator | TASK [Include auditd role] *****************************************************
2026-01-05 00:37:16.052002 | orchestrator | Monday 05 January 2026 00:36:51 +0000 (0:00:01.593) 0:08:03.445 ********
2026-01-05 00:37:16.052013 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:37:16.052024 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:37:16.052034 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:37:16.052045 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:37:16.052056 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:37:16.052067 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:37:16.052078 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:37:16.052089 | orchestrator |
2026-01-05 00:37:16.052099 | orchestrator | TASK [Include smartd role] *****************************************************
2026-01-05 00:37:16.052111 | orchestrator | Monday 05 January 2026 00:36:52 +0000 (0:00:00.569) 0:08:04.015 ********
2026-01-05 00:37:16.052122 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:37:16.052135 | orchestrator |
2026-01-05 00:37:16.052146 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-01-05 00:37:16.052157 | orchestrator | Monday 05 January 2026 00:36:53 +0000 (0:00:01.148) 0:08:05.164 ********
2026-01-05 00:37:16.052169 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:37:16.052182 | orchestrator |
2026-01-05 00:37:16.052193 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-01-05 00:37:16.052204 | orchestrator | Monday 05 January 2026 00:36:54 +0000 (0:00:00.879) 0:08:06.044 ********
2026-01-05 00:37:16.052215 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:37:16.052226 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:37:16.052237 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:37:16.052248 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:37:16.052259 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:37:16.052269 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:37:16.052280 | orchestrator | changed: [testbed-manager]
2026-01-05 00:37:16.052291 | orchestrator |
2026-01-05 00:37:16.052321 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-01-05 00:37:16.052333 | orchestrator | Monday 05 January 2026 00:37:03 +0000 (0:00:09.282) 0:08:15.326 ********
2026-01-05 00:37:16.052344 | orchestrator | changed: [testbed-manager]
2026-01-05 00:37:16.052354 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:37:16.052365 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:37:16.052376 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:37:16.052395 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:37:16.052405 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:37:16.052416 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:37:16.052427 | orchestrator |
2026-01-05 00:37:16.052438 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-01-05 00:37:16.052449 | orchestrator | Monday 05 January 2026 00:37:04 +0000 (0:00:01.207) 0:08:16.534 ********
2026-01-05 00:37:16.052460 | orchestrator | changed: [testbed-manager]
2026-01-05 00:37:16.052470 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:37:16.052481 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:37:16.052492 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:37:16.052502 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:37:16.052513 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:37:16.052523 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:37:16.052534 | orchestrator |
2026-01-05 00:37:16.052545 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-01-05 00:37:16.052555 | orchestrator | Monday 05 January 2026 00:37:06 +0000 (0:00:01.364) 0:08:17.898 ********
2026-01-05 00:37:16.052566 | orchestrator | changed: [testbed-manager]
2026-01-05 00:37:16.052577 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:37:16.052588 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:37:16.052598 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:37:16.052609 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:37:16.052619 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:37:16.052630 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:37:16.052641 | orchestrator |
2026-01-05 00:37:16.052652 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-01-05 00:37:16.052662 | orchestrator | Monday 05 January 2026 00:37:08 +0000 (0:00:02.080) 0:08:19.979 ********
2026-01-05 00:37:16.052673 | orchestrator | changed: [testbed-manager]
2026-01-05 00:37:16.052684 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:37:16.052717 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:37:16.052737 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:37:16.052757 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:37:16.052774 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:37:16.052789 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:37:16.052800 | orchestrator |
2026-01-05 00:37:16.052811 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-01-05 00:37:16.052822 | orchestrator | Monday 05 January 2026 00:37:09 +0000 (0:00:01.235) 0:08:21.215 ********
2026-01-05 00:37:16.052833 | orchestrator | changed: [testbed-manager]
2026-01-05 00:37:16.052844 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:37:16.052854 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:37:16.052865 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:37:16.052876 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:37:16.052887 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:37:16.052898 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:37:16.052909 | orchestrator |
2026-01-05 00:37:16.052919 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-01-05 00:37:16.052930 | orchestrator |
2026-01-05 00:37:16.052941 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-01-05 00:37:16.052952 | orchestrator | Monday 05 January 2026 00:37:10 +0000 (0:00:01.151) 0:08:22.366 ********
2026-01-05 00:37:16.052970 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:37:16.052981 | orchestrator |
2026-01-05 00:37:16.052992 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-05 00:37:16.053003 | orchestrator | Monday 05 January 2026 00:37:11 +0000 (0:00:00.883) 0:08:23.250 ********
2026-01-05 00:37:16.053014 | orchestrator | ok: [testbed-manager]
2026-01-05 00:37:16.053025 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:37:16.053036 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:37:16.053054 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:37:16.053065 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:37:16.053076 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:37:16.053087 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:37:16.053098 | orchestrator |
2026-01-05 00:37:16.053108 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-05 00:37:16.053119 | orchestrator | Monday 05 January 2026 00:37:12 +0000 (0:00:01.173) 0:08:24.423 ********
2026-01-05 00:37:16.053130 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:37:16.053141 | orchestrator | changed: [testbed-manager]
2026-01-05 00:37:16.053152 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:37:16.053163 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:37:16.053173 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:37:16.053184 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:37:16.053195 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:37:16.053205 | orchestrator |
2026-01-05 00:37:16.053216 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-01-05 00:37:16.053227 | orchestrator | Monday 05 January 2026 00:37:13 +0000 (0:00:01.294) 0:08:25.718 ********
2026-01-05 00:37:16.053238 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:37:16.053249 | orchestrator |
2026-01-05 00:37:16.053260 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-05 00:37:16.053271 | orchestrator | Monday 05 January 2026 00:37:14 +0000 (0:00:00.929) 0:08:26.647 ********
2026-01-05 00:37:16.053281 | orchestrator | ok: [testbed-manager]
2026-01-05 00:37:16.053292 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:37:16.053303 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:37:16.053314 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:37:16.053324 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:37:16.053335 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:37:16.053346 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:37:16.053356 | orchestrator |
2026-01-05 00:37:16.053375 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-05 00:37:17.880133 | orchestrator | Monday 05 January 2026 00:37:16 +0000 (0:00:01.136) 0:08:27.784 ********
2026-01-05 00:37:17.880275 | orchestrator | changed: [testbed-manager]
2026-01-05 00:37:17.880294 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:37:17.880306 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:37:17.880318 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:37:17.880329 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:37:17.880341 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:37:17.880352 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:37:17.880364 | orchestrator |
2026-01-05 00:37:17.880376 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:37:17.880389 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-01-05 00:37:17.880405 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-05 00:37:17.880424 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-05 00:37:17.880436 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-05 00:37:17.880447 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-01-05 00:37:17.880458 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-05 00:37:17.880472 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-05 00:37:17.880521 | orchestrator |
2026-01-05 00:37:17.880535 | orchestrator |
2026-01-05 00:37:17.880549 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:37:17.880563 | orchestrator | Monday 05 January 2026 00:37:17 +0000 (0:00:01.184) 0:08:28.969 ********
2026-01-05 00:37:17.880576 | orchestrator | ===============================================================================
2026-01-05 00:37:17.880589 | orchestrator | osism.commons.packages : Install required packages --------------------- 75.75s
2026-01-05 00:37:17.880605 | orchestrator | osism.commons.packages : Download required packages -------------------- 40.19s
2026-01-05 00:37:17.880624 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.43s
2026-01-05 00:37:17.880860 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.44s
2026-01-05 00:37:17.880882 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.38s
2026-01-05 00:37:17.880893 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.10s
2026-01-05 00:37:17.880906 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.50s
2026-01-05 00:37:17.880936 | orchestrator | osism.services.lldpd : Install lldpd package --------------------------- 10.10s
2026-01-05 00:37:17.880947 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.85s
2026-01-05 00:37:17.880958 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.73s
2026-01-05 00:37:17.880969 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.28s
2026-01-05 00:37:17.880980 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.64s
2026-01-05 00:37:17.880990 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.44s
2026-01-05 00:37:17.881001 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.95s
2026-01-05 00:37:17.881012 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.71s
2026-01-05 00:37:17.881022 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.59s
2026-01-05 00:37:17.881034 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.70s
2026-01-05 00:37:17.881045 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 6.46s
2026-01-05 00:37:17.881056 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.21s
2026-01-05 00:37:17.881067 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.92s
2026-01-05 00:37:18.350480 | orchestrator | + osism apply fail2ban
2026-01-05 00:37:31.499272 | orchestrator | 2026-01-05 00:37:31 | INFO  | Task 557eeb64-5b1e-4392-9bac-06e42227c0e5 (fail2ban) was prepared for execution.
2026-01-05 00:37:31.499376 | orchestrator | 2026-01-05 00:37:31 | INFO  | It takes a moment until task 557eeb64-5b1e-4392-9bac-06e42227c0e5 (fail2ban) has been started and output is visible here.
2026-01-05 00:37:54.256335 | orchestrator |
2026-01-05 00:37:54.256457 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-01-05 00:37:54.256472 | orchestrator |
2026-01-05 00:37:54.256483 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-01-05 00:37:54.256494 | orchestrator | Monday 05 January 2026 00:37:36 +0000 (0:00:00.274) 0:00:00.274 ********
2026-01-05 00:37:54.256505 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:37:54.256518 | orchestrator |
2026-01-05 00:37:54.256528 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-01-05 00:37:54.256538 | orchestrator | Monday 05 January 2026 00:37:37 +0000 (0:00:01.211) 0:00:01.486 ********
2026-01-05 00:37:54.256575 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:37:54.256586 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:37:54.256596 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:37:54.256606 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:37:54.256615 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:37:54.256632 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:37:54.256648 | orchestrator | changed: [testbed-manager]
2026-01-05 00:37:54.256664 | orchestrator |
2026-01-05 00:37:54.256749 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-01-05 00:37:54.256767 | orchestrator | Monday 05 January 2026 00:37:48 +0000 (0:00:11.303) 0:00:12.789 ********
2026-01-05 00:37:54.256783 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:37:54.256800 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:37:54.256811 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:37:54.256820 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:37:54.256830 | orchestrator | changed: [testbed-manager]
2026-01-05 00:37:54.256839 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:37:54.256849 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:37:54.256858 | orchestrator |
2026-01-05 00:37:54.256868 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-01-05 00:37:54.256878 | orchestrator | Monday 05 January 2026 00:37:50 +0000 (0:00:01.488) 0:00:14.278 ********
2026-01-05 00:37:54.256888 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:37:54.256899 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:37:54.256909 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:37:54.256918 | orchestrator | ok: [testbed-manager]
2026-01-05 00:37:54.256928 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:37:54.256938 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:37:54.256947 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:37:54.256957 | orchestrator |
2026-01-05 00:37:54.256967 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-01-05 00:37:54.256977 | orchestrator | Monday 05 January 2026 00:37:51 +0000 (0:00:01.523) 0:00:15.801 ********
2026-01-05 00:37:54.256986 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:37:54.256996 | orchestrator | changed: [testbed-manager]
2026-01-05 00:37:54.257006 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:37:54.257016 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:37:54.257025 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:37:54.257035 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:37:54.257045 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:37:54.257055 | orchestrator |
2026-01-05 00:37:54.257065 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:37:54.257075 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:37:54.257085 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:37:54.257095 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:37:54.257124 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:37:54.257135 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:37:54.257144 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:37:54.257154 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:37:54.257164 | orchestrator |
2026-01-05 00:37:54.257173 | orchestrator |
2026-01-05 00:37:54.257183 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:37:54.257201 | orchestrator | Monday 05 January 2026 00:37:53 +0000 (0:00:01.835) 0:00:17.637 ********
2026-01-05 00:37:54.257212 | orchestrator | ===============================================================================
2026-01-05 00:37:54.257221 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.30s
2026-01-05 00:37:54.257231 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.84s
2026-01-05 00:37:54.257240 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.52s
2026-01-05 00:37:54.257250 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.49s
2026-01-05 00:37:54.257260 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.21s
2026-01-05 00:37:54.680949 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-01-05 00:37:54.681050 | orchestrator | + osism apply network
2026-01-05 00:38:07.026769 | orchestrator | 2026-01-05 00:38:07 | INFO  | Task 662bef33-3830-4645-9a99-075dcfe210e0 (network) was prepared for execution.
2026-01-05 00:38:07.026896 | orchestrator | 2026-01-05 00:38:07 | INFO  | It takes a moment until task 662bef33-3830-4645-9a99-075dcfe210e0 (network) has been started and output is visible here.
2026-01-05 00:38:37.002846 | orchestrator |
2026-01-05 00:38:37.003008 | orchestrator | PLAY [Apply role network] ******************************************************
2026-01-05 00:38:37.003034 | orchestrator |
2026-01-05 00:38:37.003055 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-01-05 00:38:37.003074 | orchestrator | Monday 05 January 2026 00:38:11 +0000 (0:00:00.280) 0:00:00.280 ********
2026-01-05 00:38:37.003093 | orchestrator | ok: [testbed-manager]
2026-01-05 00:38:37.003112 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:38:37.003131 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:38:37.003150 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:38:37.003169 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:38:37.003188 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:38:37.003207 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:38:37.003224 | orchestrator |
2026-01-05 00:38:37.003243 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-01-05 00:38:37.003262 | orchestrator | Monday 05 January 2026 00:38:11 +0000 (0:00:00.777) 0:00:01.058 ********
2026-01-05 00:38:37.003283 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:38:37.003304 | orchestrator |
2026-01-05 00:38:37.003326 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-01-05 00:38:37.003346 | orchestrator | Monday 05 January 2026 00:38:13 +0000 (0:00:01.323) 0:00:02.381 ********
2026-01-05 00:38:37.003369 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:38:37.003392 | orchestrator | ok: [testbed-manager]
2026-01-05 00:38:37.003415 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:38:37.003437 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:38:37.003459 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:38:37.003481 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:38:37.003503 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:38:37.003525 | orchestrator |
2026-01-05 00:38:37.003548 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-01-05 00:38:37.003572 | orchestrator | Monday 05 January 2026 00:38:15 +0000 (0:00:02.047) 0:00:04.429 ********
2026-01-05 00:38:37.003595 | orchestrator | ok: [testbed-manager]
2026-01-05 00:38:37.003617 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:38:37.003638 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:38:37.003688 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:38:37.003710 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:38:37.003731 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:38:37.003751 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:38:37.003772 | orchestrator |
2026-01-05 00:38:37.003792 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-01-05 00:38:37.003848 | orchestrator | Monday 05 January 2026 00:38:17 +0000 (0:00:01.820) 0:00:06.249 ********
2026-01-05 00:38:37.003870 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-01-05 00:38:37.003890 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-01-05 00:38:37.003907 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-01-05 00:38:37.003925 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-01-05 00:38:37.003944 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-01-05 00:38:37.003961 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-01-05 00:38:37.003979 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-01-05 00:38:37.004000 | orchestrator |
2026-01-05 00:38:37.004017 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-01-05 00:38:37.004034 | orchestrator | Monday 05 January 2026 00:38:18 +0000 (0:00:00.999) 0:00:07.248 ********
2026-01-05 00:38:37.004050 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-05 00:38:37.004067 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-05 00:38:37.004083 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-05 00:38:37.004100 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-05 00:38:37.004115 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-05 00:38:37.004132 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-05 00:38:37.004147 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-05 00:38:37.004163 | orchestrator |
2026-01-05 00:38:37.004180 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-01-05 00:38:37.004197 | orchestrator | Monday 05 January 2026 00:38:21 +0000 (0:00:03.447) 0:00:10.696 ********
2026-01-05 00:38:37.004216 | orchestrator | changed: [testbed-manager]
2026-01-05 00:38:37.004233 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:38:37.004250 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:38:37.004266 | orchestrator | changed:
[testbed-node-2] 2026-01-05 00:38:37.004282 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:38:37.004298 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:38:37.004316 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:38:37.004332 | orchestrator | 2026-01-05 00:38:37.004349 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-01-05 00:38:37.004365 | orchestrator | Monday 05 January 2026 00:38:23 +0000 (0:00:01.737) 0:00:12.433 ******** 2026-01-05 00:38:37.004384 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 00:38:37.004400 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 00:38:37.004416 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-05 00:38:37.004433 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-05 00:38:37.004449 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-05 00:38:37.004465 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-05 00:38:37.004481 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-05 00:38:37.004497 | orchestrator | 2026-01-05 00:38:37.004513 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-01-05 00:38:37.004531 | orchestrator | Monday 05 January 2026 00:38:25 +0000 (0:00:01.830) 0:00:14.263 ******** 2026-01-05 00:38:37.004547 | orchestrator | ok: [testbed-manager] 2026-01-05 00:38:37.004564 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:38:37.004580 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:38:37.004595 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:38:37.004611 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:38:37.004628 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:38:37.004688 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:38:37.004710 | orchestrator | 2026-01-05 00:38:37.004727 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-01-05 00:38:37.004778 | 
orchestrator | Monday 05 January 2026 00:38:26 +0000 (0:00:01.302) 0:00:15.566 ******** 2026-01-05 00:38:37.004798 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:38:37.004816 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:38:37.004832 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:38:37.004873 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:38:37.004891 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:38:37.004907 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:38:37.004923 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:38:37.004939 | orchestrator | 2026-01-05 00:38:37.004956 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-01-05 00:38:37.004973 | orchestrator | Monday 05 January 2026 00:38:27 +0000 (0:00:00.782) 0:00:16.348 ******** 2026-01-05 00:38:37.004989 | orchestrator | ok: [testbed-manager] 2026-01-05 00:38:37.005005 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:38:37.005020 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:38:37.005036 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:38:37.005052 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:38:37.005069 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:38:37.005085 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:38:37.005101 | orchestrator | 2026-01-05 00:38:37.005117 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-01-05 00:38:37.005133 | orchestrator | Monday 05 January 2026 00:38:29 +0000 (0:00:02.299) 0:00:18.648 ******** 2026-01-05 00:38:37.005150 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:38:37.005166 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:38:37.005182 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:38:37.005198 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:38:37.005214 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:38:37.005231 | 
orchestrator | skipping: [testbed-node-5] 2026-01-05 00:38:37.005248 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-01-05 00:38:37.005265 | orchestrator | 2026-01-05 00:38:37.005281 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-01-05 00:38:37.005296 | orchestrator | Monday 05 January 2026 00:38:30 +0000 (0:00:00.984) 0:00:19.633 ******** 2026-01-05 00:38:37.005312 | orchestrator | ok: [testbed-manager] 2026-01-05 00:38:37.005328 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:38:37.005344 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:38:37.005359 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:38:37.005375 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:38:37.005391 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:38:37.005409 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:38:37.005425 | orchestrator | 2026-01-05 00:38:37.005440 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-01-05 00:38:37.005457 | orchestrator | Monday 05 January 2026 00:38:32 +0000 (0:00:01.713) 0:00:21.346 ******** 2026-01-05 00:38:37.005475 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:38:37.005495 | orchestrator | 2026-01-05 00:38:37.005513 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-01-05 00:38:37.005529 | orchestrator | Monday 05 January 2026 00:38:33 +0000 (0:00:01.387) 0:00:22.733 ******** 2026-01-05 00:38:37.005544 | orchestrator | ok: [testbed-manager] 2026-01-05 00:38:37.005562 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:38:37.005607 | orchestrator 
| ok: [testbed-node-1] 2026-01-05 00:38:37.005628 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:38:37.005719 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:38:37.005742 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:38:37.005760 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:38:37.005777 | orchestrator | 2026-01-05 00:38:37.005796 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-01-05 00:38:37.005822 | orchestrator | Monday 05 January 2026 00:38:34 +0000 (0:00:01.244) 0:00:23.978 ******** 2026-01-05 00:38:37.005840 | orchestrator | ok: [testbed-manager] 2026-01-05 00:38:37.005858 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:38:37.005876 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:38:37.005909 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:38:37.005927 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:38:37.005945 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:38:37.005962 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:38:37.005981 | orchestrator | 2026-01-05 00:38:37.005998 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-01-05 00:38:37.006099 | orchestrator | Monday 05 January 2026 00:38:35 +0000 (0:00:00.739) 0:00:24.717 ******** 2026-01-05 00:38:37.006126 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-01-05 00:38:37.006144 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-01-05 00:38:37.006160 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-01-05 00:38:37.006179 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-05 00:38:37.006197 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-01-05 00:38:37.006215 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-05 00:38:37.006233 
| orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-01-05 00:38:37.006249 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-05 00:38:37.006266 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-01-05 00:38:37.006284 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-05 00:38:37.006301 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-05 00:38:37.006320 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-05 00:38:37.006337 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-01-05 00:38:37.006352 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-05 00:38:37.006367 | orchestrator | 2026-01-05 00:38:37.006404 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-01-05 00:38:55.085210 | orchestrator | Monday 05 January 2026 00:38:36 +0000 (0:00:01.448) 0:00:26.166 ******** 2026-01-05 00:38:55.085334 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:38:55.085350 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:38:55.085361 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:38:55.085371 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:38:55.085381 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:38:55.085390 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:38:55.085401 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:38:55.085411 | orchestrator | 2026-01-05 00:38:55.085421 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-01-05 00:38:55.085432 | orchestrator | Monday 05 January 2026 00:38:37 +0000 (0:00:00.662) 0:00:26.828 ******** 2026-01-05 00:38:55.085444 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-1, testbed-node-0, testbed-node-3, testbed-node-2, testbed-node-4, testbed-node-5 2026-01-05 00:38:55.085456 | orchestrator | 2026-01-05 00:38:55.085467 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-01-05 00:38:55.085477 | orchestrator | Monday 05 January 2026 00:38:42 +0000 (0:00:04.793) 0:00:31.622 ******** 2026-01-05 00:38:55.085488 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-05 00:38:55.085501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-01-05 00:38:55.085534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-05 00:38:55.085545 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-01-05 00:38:55.085554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-05 
00:38:55.085582 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-01-05 00:38:55.085599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-01-05 00:38:55.085610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-01-05 00:38:55.085620 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-01-05 00:38:55.085629 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-05 00:38:55.085666 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-01-05 00:38:55.085694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-01-05 00:38:55.085705 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-01-05 00:38:55.085715 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-01-05 00:38:55.085725 | orchestrator | 2026-01-05 00:38:55.085737 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-01-05 00:38:55.085748 | orchestrator | Monday 05 January 2026 00:38:48 +0000 (0:00:06.289) 0:00:37.912 ******** 2026-01-05 00:38:55.085760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-05 00:38:55.085782 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-05 00:38:55.085794 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-01-05 00:38:55.085806 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-01-05 00:38:55.085817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-05 00:38:55.085829 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-01-05 00:38:55.085846 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-01-05 00:38:55.085859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-01-05 00:38:55.085872 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-01-05 00:38:55.085884 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 
'mtu': 1350, 'vni': 23}}) 2026-01-05 00:38:55.085896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-01-05 00:38:55.085908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-01-05 00:38:55.085927 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-05 00:39:11.078444 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-01-05 00:39:11.078564 | orchestrator | 2026-01-05 00:39:11.078577 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-01-05 00:39:11.078588 | orchestrator | Monday 05 January 2026 00:38:55 +0000 (0:00:06.330) 0:00:44.242 ******** 2026-01-05 00:39:11.078624 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:39:11.078708 | orchestrator | 2026-01-05 00:39:11.078723 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 
2026-01-05 00:39:11.078737 | orchestrator | Monday 05 January 2026 00:38:56 +0000 (0:00:01.467) 0:00:45.709 ******** 2026-01-05 00:39:11.078774 | orchestrator | ok: [testbed-manager] 2026-01-05 00:39:11.078791 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:39:11.078805 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:39:11.078819 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:39:11.078832 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:39:11.078846 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:39:11.078860 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:39:11.078869 | orchestrator | 2026-01-05 00:39:11.078877 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-01-05 00:39:11.078885 | orchestrator | Monday 05 January 2026 00:38:57 +0000 (0:00:01.339) 0:00:47.049 ******** 2026-01-05 00:39:11.078893 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-05 00:39:11.078903 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-05 00:39:11.078911 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-05 00:39:11.078920 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-05 00:39:11.078928 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:39:11.078937 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-05 00:39:11.078945 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-05 00:39:11.078953 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-05 00:39:11.078961 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-05 00:39:11.078969 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:39:11.078977 | 
orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-05 00:39:11.078985 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-05 00:39:11.078993 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-05 00:39:11.079020 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-05 00:39:11.079028 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:39:11.079036 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-05 00:39:11.079044 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-05 00:39:11.079052 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-05 00:39:11.079062 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-05 00:39:11.079076 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:39:11.079089 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-05 00:39:11.079103 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-05 00:39:11.079118 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-05 00:39:11.079131 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-05 00:39:11.079144 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:39:11.079156 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-05 00:39:11.079164 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-05 00:39:11.079180 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  
2026-01-05 00:39:11.079189 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-05 00:39:11.079203 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:39:11.079216 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-05 00:39:11.079229 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-05 00:39:11.079243 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-05 00:39:11.079257 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-05 00:39:11.079270 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:39:11.079283 | orchestrator | 2026-01-05 00:39:11.079295 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-01-05 00:39:11.079322 | orchestrator | Monday 05 January 2026 00:38:58 +0000 (0:00:01.101) 0:00:48.150 ******** 2026-01-05 00:39:11.079331 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:39:11.079339 | orchestrator | 2026-01-05 00:39:11.079347 | orchestrator | TASK [osism.commons.network : Install required packages for network-extra-init] *** 2026-01-05 00:39:11.079355 | orchestrator | Monday 05 January 2026 00:39:00 +0000 (0:00:01.323) 0:00:49.474 ******** 2026-01-05 00:39:11.079363 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:39:11.079371 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:39:11.079379 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:39:11.079387 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:39:11.079394 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:39:11.079402 | orchestrator | 
skipping: [testbed-node-4] 2026-01-05 00:39:11.079410 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:39:11.079418 | orchestrator | 2026-01-05 00:39:11.079426 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-01-05 00:39:11.079434 | orchestrator | Monday 05 January 2026 00:39:01 +0000 (0:00:00.715) 0:00:50.190 ******** 2026-01-05 00:39:11.079441 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:39:11.079449 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:39:11.079457 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:39:11.079465 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:39:11.079472 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:39:11.079480 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:39:11.079488 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:39:11.079495 | orchestrator | 2026-01-05 00:39:11.079508 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 2026-01-05 00:39:11.079521 | orchestrator | Monday 05 January 2026 00:39:01 +0000 (0:00:00.929) 0:00:51.119 ******** 2026-01-05 00:39:11.079535 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:39:11.079549 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:39:11.079563 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:39:11.079575 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:39:11.079589 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:39:11.079597 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:39:11.079607 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:39:11.079620 | orchestrator | 2026-01-05 00:39:11.079653 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-01-05 00:39:11.079666 | orchestrator | Monday 05 January 2026 00:39:02 +0000 (0:00:00.750) 0:00:51.869 ******** 2026-01-05 00:39:11.079680 | orchestrator | 
skipping: [testbed-manager] 2026-01-05 00:39:11.079694 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:39:11.079708 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:39:11.079721 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:39:11.079743 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:39:11.079752 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:39:11.079759 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:39:11.079767 | orchestrator | 2026-01-05 00:39:11.079775 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-01-05 00:39:11.079784 | orchestrator | Monday 05 January 2026 00:39:03 +0000 (0:00:00.899) 0:00:52.769 ******** 2026-01-05 00:39:11.079791 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:39:11.079799 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:39:11.079807 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:39:11.079815 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:39:11.079823 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:39:11.079837 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:39:11.079845 | orchestrator | ok: [testbed-manager] 2026-01-05 00:39:11.079853 | orchestrator | 2026-01-05 00:39:11.079861 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] ******* 2026-01-05 00:39:11.079869 | orchestrator | Monday 05 January 2026 00:39:06 +0000 (0:00:02.529) 0:00:55.298 ******** 2026-01-05 00:39:11.079877 | orchestrator | ok: [testbed-manager] 2026-01-05 00:39:11.079885 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:39:11.079893 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:39:11.079900 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:39:11.079908 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:39:11.079916 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:39:11.079924 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:39:11.079931 | orchestrator | 2026-01-05 
00:39:11.079939 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-01-05 00:39:11.079950 | orchestrator | Monday 05 January 2026 00:39:07 +0000 (0:00:01.370) 0:00:56.668 ********
2026-01-05 00:39:11.079964 | orchestrator | ok: [testbed-manager]
2026-01-05 00:39:11.079976 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:39:11.079989 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:39:11.080003 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:39:11.080017 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:39:11.080031 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:39:11.080044 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:39:11.080057 | orchestrator |
2026-01-05 00:39:11.080067 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-01-05 00:39:11.080081 | orchestrator | Monday 05 January 2026 00:39:09 +0000 (0:00:02.339) 0:00:59.007 ********
2026-01-05 00:39:11.080095 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:39:11.080108 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:39:11.080123 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:39:11.080137 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:39:11.080149 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:39:11.080162 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:39:11.080171 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:39:11.080179 | orchestrator |
2026-01-05 00:39:11.080187 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-01-05 00:39:11.080195 | orchestrator | Monday 05 January 2026 00:39:10 +0000 (0:00:00.596) 0:00:59.604 ********
2026-01-05 00:39:11.080203 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:39:11.080210 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:39:11.080218 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:39:11.080226 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:39:11.080234 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:39:11.080242 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:39:11.080250 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:39:11.080257 | orchestrator |
2026-01-05 00:39:11.080265 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:39:11.383260 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-05 00:39:11.383376 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-05 00:39:11.383422 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-05 00:39:11.383435 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-05 00:39:11.383446 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-05 00:39:11.383457 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-05 00:39:11.383468 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-05 00:39:11.383480 | orchestrator |
2026-01-05 00:39:11.383492 | orchestrator |
2026-01-05 00:39:11.383504 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:39:11.383517 | orchestrator | Monday 05 January 2026 00:39:11 +0000 (0:00:00.646) 0:01:00.251 ********
2026-01-05 00:39:11.383528 | orchestrator | ===============================================================================
2026-01-05 00:39:11.383539 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.33s
2026-01-05 00:39:11.383550 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.29s
2026-01-05 00:39:11.383561 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.79s
2026-01-05 00:39:11.383571 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.45s
2026-01-05 00:39:11.383583 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 2.53s
2026-01-05 00:39:11.383594 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.34s
2026-01-05 00:39:11.383604 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.30s
2026-01-05 00:39:11.383615 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.05s
2026-01-05 00:39:11.383655 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.83s
2026-01-05 00:39:11.383667 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.82s
2026-01-05 00:39:11.383678 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.74s
2026-01-05 00:39:11.383689 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.71s
2026-01-05 00:39:11.383718 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.47s
2026-01-05 00:39:11.383730 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.45s
2026-01-05 00:39:11.383741 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.39s
2026-01-05 00:39:11.383752 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.37s
2026-01-05 00:39:11.383763 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.34s
2026-01-05 00:39:11.383774 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.32s
2026-01-05 00:39:11.383785 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.32s
2026-01-05 00:39:11.383798 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.30s
2026-01-05 00:39:11.664424 | orchestrator | + osism apply wireguard
2026-01-05 00:39:23.335915 | orchestrator | 2026-01-05 00:39:23 | INFO  | Task 19676605-149a-43a1-b5cc-a7bc821e8033 (wireguard) was prepared for execution.
2026-01-05 00:39:23.336053 | orchestrator | 2026-01-05 00:39:23 | INFO  | It takes a moment until task 19676605-149a-43a1-b5cc-a7bc821e8033 (wireguard) has been started and output is visible here.
2026-01-05 00:39:44.701144 | orchestrator |
2026-01-05 00:39:44.701293 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-01-05 00:39:44.701306 | orchestrator |
2026-01-05 00:39:44.701315 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-01-05 00:39:44.701323 | orchestrator | Monday 05 January 2026 00:39:27 +0000 (0:00:00.229) 0:00:00.229 ********
2026-01-05 00:39:44.701331 | orchestrator | ok: [testbed-manager]
2026-01-05 00:39:44.701340 | orchestrator |
2026-01-05 00:39:44.701347 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-01-05 00:39:44.701355 | orchestrator | Monday 05 January 2026 00:39:29 +0000 (0:00:01.608) 0:00:01.837 ********
2026-01-05 00:39:44.701362 | orchestrator | changed: [testbed-manager]
2026-01-05 00:39:44.701370 | orchestrator |
2026-01-05 00:39:44.701378 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-01-05 00:39:44.701385 | orchestrator | Monday 05 January 2026 00:39:36 +0000 (0:00:07.514) 0:00:09.352 ********
2026-01-05 00:39:44.701392 | orchestrator | changed: [testbed-manager]
2026-01-05 00:39:44.701399 | orchestrator |
2026-01-05 00:39:44.701406 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-01-05 00:39:44.701414 | orchestrator | Monday 05 January 2026 00:39:37 +0000 (0:00:00.591) 0:00:09.944 ********
2026-01-05 00:39:44.701421 | orchestrator | changed: [testbed-manager]
2026-01-05 00:39:44.701428 | orchestrator |
2026-01-05 00:39:44.701435 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-01-05 00:39:44.701442 | orchestrator | Monday 05 January 2026 00:39:38 +0000 (0:00:00.480) 0:00:10.425 ********
2026-01-05 00:39:44.701449 | orchestrator | ok: [testbed-manager]
2026-01-05 00:39:44.701456 | orchestrator |
2026-01-05 00:39:44.701463 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-01-05 00:39:44.701471 | orchestrator | Monday 05 January 2026 00:39:38 +0000 (0:00:00.778) 0:00:11.203 ********
2026-01-05 00:39:44.701478 | orchestrator | ok: [testbed-manager]
2026-01-05 00:39:44.701485 | orchestrator |
2026-01-05 00:39:44.701492 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-01-05 00:39:44.701499 | orchestrator | Monday 05 January 2026 00:39:39 +0000 (0:00:00.457) 0:00:11.660 ********
2026-01-05 00:39:44.701506 | orchestrator | ok: [testbed-manager]
2026-01-05 00:39:44.701514 | orchestrator |
2026-01-05 00:39:44.701522 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-01-05 00:39:44.701529 | orchestrator | Monday 05 January 2026 00:39:39 +0000 (0:00:00.442) 0:00:12.103 ********
2026-01-05 00:39:44.701536 | orchestrator | changed: [testbed-manager]
2026-01-05 00:39:44.701544 | orchestrator |
2026-01-05 00:39:44.701551 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-01-05 00:39:44.701558 | orchestrator | Monday 05 January 2026 00:39:41 +0000 (0:00:01.284) 0:00:13.388 ********
2026-01-05 00:39:44.701565 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-05 00:39:44.701573 | orchestrator | changed: [testbed-manager]
2026-01-05 00:39:44.701580 | orchestrator |
2026-01-05 00:39:44.701588 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-01-05 00:39:44.701595 | orchestrator | Monday 05 January 2026 00:39:41 +0000 (0:00:00.932) 0:00:14.321 ********
2026-01-05 00:39:44.701602 | orchestrator | changed: [testbed-manager]
2026-01-05 00:39:44.701642 | orchestrator |
2026-01-05 00:39:44.701652 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-01-05 00:39:44.701659 | orchestrator | Monday 05 January 2026 00:39:43 +0000 (0:00:01.567) 0:00:15.888 ********
2026-01-05 00:39:44.701666 | orchestrator | changed: [testbed-manager]
2026-01-05 00:39:44.701675 | orchestrator |
2026-01-05 00:39:44.701683 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:39:44.701692 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:39:44.701702 | orchestrator |
2026-01-05 00:39:44.701710 | orchestrator |
2026-01-05 00:39:44.701719 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:39:44.701734 | orchestrator | Monday 05 January 2026 00:39:44 +0000 (0:00:00.861) 0:00:16.750 ********
2026-01-05 00:39:44.701743 | orchestrator | ===============================================================================
2026-01-05 00:39:44.701752 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.51s
2026-01-05 00:39:44.701761 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.61s
2026-01-05 00:39:44.701769 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.57s
2026-01-05 00:39:44.701778 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.28s
2026-01-05 00:39:44.701787 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.93s
2026-01-05 00:39:44.701796 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.86s
2026-01-05 00:39:44.701804 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.78s
2026-01-05 00:39:44.701813 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.59s
2026-01-05 00:39:44.701821 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.48s
2026-01-05 00:39:44.701829 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.46s
2026-01-05 00:39:44.701838 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.44s
2026-01-05 00:39:44.934271 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-01-05 00:39:44.966375 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-01-05 00:39:44.966475 | orchestrator | Dload Upload Total Spent Left Speed
2026-01-05 00:39:45.038192 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 194 0 --:--:-- --:--:-- --:--:-- 197
2026-01-05 00:39:45.049379 | orchestrator | + osism apply --environment custom workarounds
2026-01-05 00:39:46.877304 | orchestrator | 2026-01-05 00:39:46 | INFO  | Trying to run play workarounds in environment custom
2026-01-05 00:39:56.967698 | orchestrator | 2026-01-05 00:39:56 | INFO  | Task 1a6dfcfe-06be-44e4-9047-f8325f07a4d8 (workarounds) was prepared for execution.
2026-01-05 00:39:56.967823 | orchestrator | 2026-01-05 00:39:56 | INFO  | It takes a moment until task 1a6dfcfe-06be-44e4-9047-f8325f07a4d8 (workarounds) has been started and output is visible here.
2026-01-05 00:40:24.098136 | orchestrator |
2026-01-05 00:40:24.098280 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 00:40:24.098301 | orchestrator |
2026-01-05 00:40:24.098313 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-01-05 00:40:24.098325 | orchestrator | Monday 05 January 2026 00:40:01 +0000 (0:00:00.130) 0:00:00.130 ********
2026-01-05 00:40:24.098337 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-01-05 00:40:24.098349 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-01-05 00:40:24.098360 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-01-05 00:40:24.098371 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-01-05 00:40:24.098383 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-01-05 00:40:24.098400 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-01-05 00:40:24.098444 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-01-05 00:40:24.098464 | orchestrator |
2026-01-05 00:40:24.098482 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-01-05 00:40:24.098500 | orchestrator |
2026-01-05 00:40:24.098518 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-01-05 00:40:24.098535 | orchestrator | Monday 05 January 2026 00:40:02 +0000 (0:00:00.880) 0:00:01.011 ********
2026-01-05 00:40:24.098554 | orchestrator | ok: [testbed-manager]
2026-01-05 00:40:24.098639 | orchestrator |
2026-01-05 00:40:24.098661 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-01-05 00:40:24.098687 | orchestrator |
2026-01-05 00:40:24.098707 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-01-05 00:40:24.098730 | orchestrator | Monday 05 January 2026 00:40:04 +0000 (0:00:02.526) 0:00:03.537 ********
2026-01-05 00:40:24.098752 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:40:24.098774 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:40:24.098803 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:40:24.098828 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:40:24.098855 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:40:24.098878 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:40:24.098902 | orchestrator |
2026-01-05 00:40:24.098926 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-01-05 00:40:24.098949 | orchestrator |
2026-01-05 00:40:24.098967 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-01-05 00:40:24.098985 | orchestrator | Monday 05 January 2026 00:40:06 +0000 (0:00:01.836) 0:00:05.373 ********
2026-01-05 00:40:24.099004 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-05 00:40:24.099023 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-05 00:40:24.099041 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-05 00:40:24.099061 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-05 00:40:24.099082 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-05 00:40:24.099099 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-05 00:40:24.099118 | orchestrator |
2026-01-05 00:40:24.099139 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-01-05 00:40:24.099158 | orchestrator | Monday 05 January 2026 00:40:08 +0000 (0:00:01.550) 0:00:06.924 ********
2026-01-05 00:40:24.099183 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:40:24.099202 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:40:24.099231 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:40:24.099252 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:40:24.099276 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:40:24.099294 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:40:24.099310 | orchestrator |
2026-01-05 00:40:24.099330 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-01-05 00:40:24.099361 | orchestrator | Monday 05 January 2026 00:40:11 +0000 (0:00:03.743) 0:00:10.668 ********
2026-01-05 00:40:24.099381 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:24.099398 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:40:24.099414 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:40:24.099431 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:40:24.099446 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:40:24.099463 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:40:24.099479 | orchestrator |
2026-01-05 00:40:24.099497 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-01-05 00:40:24.099516 | orchestrator |
2026-01-05 00:40:24.099535 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-01-05 00:40:24.099556 | orchestrator | Monday 05 January 2026 00:40:12 +0000 (0:00:00.827) 0:00:11.495 ********
2026-01-05 00:40:24.099575 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:40:24.099622 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:40:24.099644 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:40:24.099664 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:40:24.099682 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:40:24.099703 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:40:24.099734 | orchestrator | changed: [testbed-manager]
2026-01-05 00:40:24.099751 | orchestrator |
2026-01-05 00:40:24.099770 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-01-05 00:40:24.099786 | orchestrator | Monday 05 January 2026 00:40:14 +0000 (0:00:01.644) 0:00:13.140 ********
2026-01-05 00:40:24.099803 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:40:24.099819 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:40:24.099836 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:40:24.099856 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:40:24.099876 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:40:24.099892 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:40:24.099933 | orchestrator | changed: [testbed-manager]
2026-01-05 00:40:24.099951 | orchestrator |
2026-01-05 00:40:24.099971 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-01-05 00:40:24.099991 | orchestrator | Monday 05 January 2026 00:40:15 +0000 (0:00:01.708) 0:00:14.848 ********
2026-01-05 00:40:24.100012 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:40:24.100029 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:40:24.100048 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:40:24.100064 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:40:24.100084 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:40:24.100103 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:40:24.100122 | orchestrator | ok: [testbed-manager]
2026-01-05 00:40:24.100140 | orchestrator |
2026-01-05 00:40:24.100159 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-01-05 00:40:24.100178 | orchestrator | Monday 05 January 2026 00:40:17 +0000 (0:00:01.658) 0:00:16.507 ********
2026-01-05 00:40:24.100197 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:40:24.100215 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:40:24.100227 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:40:24.100238 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:40:24.100249 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:40:24.100260 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:40:24.100271 | orchestrator | changed: [testbed-manager]
2026-01-05 00:40:24.100282 | orchestrator |
2026-01-05 00:40:24.100293 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-01-05 00:40:24.100304 | orchestrator | Monday 05 January 2026 00:40:19 +0000 (0:00:02.028) 0:00:18.535 ********
2026-01-05 00:40:24.100315 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:24.100326 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:40:24.100337 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:40:24.100348 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:40:24.100359 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:40:24.100370 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:40:24.100381 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:40:24.100392 | orchestrator |
2026-01-05 00:40:24.100403 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-01-05 00:40:24.100414 | orchestrator |
2026-01-05 00:40:24.100425 | orchestrator | TASK [Install python3-docker] **************************************************
2026-01-05 00:40:24.100436 | orchestrator | Monday 05 January 2026 00:40:20 +0000 (0:00:00.740) 0:00:19.276 ********
2026-01-05 00:40:24.100447 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:40:24.100458 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:40:24.100469 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:40:24.100480 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:40:24.100491 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:40:24.100504 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:40:24.100521 | orchestrator | ok: [testbed-manager]
2026-01-05 00:40:24.100536 | orchestrator |
2026-01-05 00:40:24.100547 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:40:24.100560 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:40:24.100573 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:40:24.100645 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:40:24.100660 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:40:24.100671 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:40:24.100689 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:40:24.100701 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:40:24.100712 | orchestrator |
2026-01-05 00:40:24.100723 | orchestrator |
2026-01-05 00:40:24.100734 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:40:24.100746 | orchestrator | Monday 05 January 2026 00:40:24 +0000 (0:00:03.658) 0:00:22.934 ********
2026-01-05 00:40:24.100757 | orchestrator | ===============================================================================
2026-01-05 00:40:24.100768 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.74s
2026-01-05 00:40:24.100778 | orchestrator | Install python3-docker -------------------------------------------------- 3.66s
2026-01-05 00:40:24.100789 | orchestrator | Apply netplan configuration --------------------------------------------- 2.53s
2026-01-05 00:40:24.100801 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 2.03s
2026-01-05 00:40:24.100814 | orchestrator | Apply netplan configuration --------------------------------------------- 1.84s
2026-01-05 00:40:24.100832 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.71s
2026-01-05 00:40:24.100844 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.66s
2026-01-05 00:40:24.100855 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.64s
2026-01-05 00:40:24.100866 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.55s
2026-01-05 00:40:24.100877 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.88s
2026-01-05 00:40:24.100888 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.83s
2026-01-05 00:40:24.100909 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.74s
2026-01-05 00:40:24.708031 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-01-05 00:40:36.347487 | orchestrator | 2026-01-05 00:40:36 | INFO  | Task f4d843a9-9d6d-4d19-b155-e99e8396b5df (reboot) was prepared for execution.
2026-01-05 00:40:36.347619 | orchestrator | 2026-01-05 00:40:36 | INFO  | It takes a moment until task f4d843a9-9d6d-4d19-b155-e99e8396b5df (reboot) has been started and output is visible here.
2026-01-05 00:40:47.076281 | orchestrator |
2026-01-05 00:40:47.076394 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-05 00:40:47.076412 | orchestrator |
2026-01-05 00:40:47.076424 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-05 00:40:47.076435 | orchestrator | Monday 05 January 2026 00:40:40 +0000 (0:00:00.225) 0:00:00.225 ********
2026-01-05 00:40:47.076445 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:40:47.076457 | orchestrator |
2026-01-05 00:40:47.076467 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-05 00:40:47.076477 | orchestrator | Monday 05 January 2026 00:40:40 +0000 (0:00:00.098) 0:00:00.324 ********
2026-01-05 00:40:47.076487 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:40:47.076497 | orchestrator |
2026-01-05 00:40:47.076507 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-05 00:40:47.076540 | orchestrator | Monday 05 January 2026 00:40:41 +0000 (0:00:01.032) 0:00:01.357 ********
2026-01-05 00:40:47.076551 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:40:47.076561 | orchestrator |
2026-01-05 00:40:47.076570 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-05 00:40:47.076641 | orchestrator |
2026-01-05 00:40:47.076653 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-05 00:40:47.076663 | orchestrator | Monday 05 January 2026 00:40:41 +0000 (0:00:00.128) 0:00:01.485 ********
2026-01-05 00:40:47.076672 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:40:47.076682 | orchestrator |
2026-01-05 00:40:47.076691 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-05 00:40:47.076701 | orchestrator | Monday 05 January 2026 00:40:41 +0000 (0:00:00.110) 0:00:01.595 ********
2026-01-05 00:40:47.076710 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:40:47.076720 | orchestrator |
2026-01-05 00:40:47.076730 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-05 00:40:47.076740 | orchestrator | Monday 05 January 2026 00:40:42 +0000 (0:00:00.704) 0:00:02.300 ********
2026-01-05 00:40:47.076749 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:40:47.076759 | orchestrator |
2026-01-05 00:40:47.076768 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-05 00:40:47.076778 | orchestrator |
2026-01-05 00:40:47.076787 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-05 00:40:47.076797 | orchestrator | Monday 05 January 2026 00:40:42 +0000 (0:00:00.132) 0:00:02.432 ********
2026-01-05 00:40:47.076806 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:40:47.076816 | orchestrator |
2026-01-05 00:40:47.076825 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-05 00:40:47.076835 | orchestrator | Monday 05 January 2026 00:40:42 +0000 (0:00:00.270) 0:00:02.703 ********
2026-01-05 00:40:47.076844 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:40:47.076854 | orchestrator |
2026-01-05 00:40:47.076863 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-05 00:40:47.076873 | orchestrator | Monday 05 January 2026 00:40:43 +0000 (0:00:00.723) 0:00:03.427 ********
2026-01-05 00:40:47.076882 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:40:47.076892 | orchestrator |
2026-01-05 00:40:47.076901 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-05 00:40:47.076911 | orchestrator |
2026-01-05 00:40:47.076920 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-05 00:40:47.076946 | orchestrator | Monday 05 January 2026 00:40:43 +0000 (0:00:00.138) 0:00:03.565 ********
2026-01-05 00:40:47.076957 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:47.076966 | orchestrator |
2026-01-05 00:40:47.076976 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-05 00:40:47.076985 | orchestrator | Monday 05 January 2026 00:40:43 +0000 (0:00:00.123) 0:00:03.689 ********
2026-01-05 00:40:47.076995 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:40:47.077004 | orchestrator |
2026-01-05 00:40:47.077014 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-05 00:40:47.077023 | orchestrator | Monday 05 January 2026 00:40:44 +0000 (0:00:00.756) 0:00:04.445 ********
2026-01-05 00:40:47.077035 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:47.077052 | orchestrator |
2026-01-05 00:40:47.077070 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-05 00:40:47.077087 | orchestrator |
2026-01-05 00:40:47.077102 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-05 00:40:47.077118 | orchestrator | Monday 05 January 2026 00:40:44 +0000 (0:00:00.148) 0:00:04.594 ********
2026-01-05 00:40:47.077133 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:40:47.077149 | orchestrator |
2026-01-05 00:40:47.077164 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-05 00:40:47.077190 | orchestrator | Monday 05 January 2026 00:40:44 +0000 (0:00:00.118) 0:00:04.712 ********
2026-01-05 00:40:47.077208 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:40:47.077223 | orchestrator |
2026-01-05 00:40:47.077239 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-05 00:40:47.077256 | orchestrator | Monday 05 January 2026 00:40:45 +0000 (0:00:00.657) 0:00:05.370 ********
2026-01-05 00:40:47.077273 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:40:47.077289 | orchestrator |
2026-01-05 00:40:47.077305 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-05 00:40:47.077315 | orchestrator |
2026-01-05 00:40:47.077325 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-05 00:40:47.077335 | orchestrator | Monday 05 January 2026 00:40:45 +0000 (0:00:00.118) 0:00:05.488 ********
2026-01-05 00:40:47.077344 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:40:47.077354 | orchestrator |
2026-01-05 00:40:47.077363 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-05 00:40:47.077374 | orchestrator | Monday 05 January 2026 00:40:45 +0000 (0:00:00.111) 0:00:05.600 ********
2026-01-05 00:40:47.077383 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:40:47.077393 | orchestrator |
2026-01-05 00:40:47.077403 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-05 00:40:47.077413 | orchestrator | Monday 05 January 2026 00:40:46 +0000 (0:00:00.694) 0:00:06.295 ********
2026-01-05 00:40:47.077441 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:40:47.077452 | orchestrator |
2026-01-05 00:40:47.077461 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:40:47.077472 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:40:47.077483 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:40:47.077493 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:40:47.077503 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:40:47.077512 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:40:47.077522 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:40:47.077531 | orchestrator |
2026-01-05 00:40:47.077541 | orchestrator |
2026-01-05 00:40:47.077551 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:40:47.077561 | orchestrator | Monday 05 January 2026 00:40:46 +0000 (0:00:00.042) 0:00:06.338 ********
2026-01-05 00:40:47.077570 | orchestrator | ===============================================================================
2026-01-05 00:40:47.077601 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.57s
2026-01-05 00:40:47.077612 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.83s
2026-01-05 00:40:47.077621 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.71s
2026-01-05 00:40:47.523085 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2026-01-05 00:40:59.994877 | orchestrator | 2026-01-05 00:40:59 | INFO  | Task f59ff3ee-23b9-4e69-8a81-376685b3a0dc (wait-for-connection) was prepared for execution.
2026-01-05 00:40:59.994987 | orchestrator | 2026-01-05 00:40:59 | INFO  | It takes a moment until task f59ff3ee-23b9-4e69-8a81-376685b3a0dc (wait-for-connection) has been started and output is visible here.
2026-01-05 00:41:16.198330 | orchestrator |
2026-01-05 00:41:16.198445 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-01-05 00:41:16.198460 | orchestrator |
2026-01-05 00:41:16.198472 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-01-05 00:41:16.198484 | orchestrator | Monday 05 January 2026 00:41:04 +0000 (0:00:00.256) 0:00:00.256 ********
2026-01-05 00:41:16.198495 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:41:16.198532 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:41:16.198544 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:41:16.198555 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:41:16.198607 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:41:16.198621 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:41:16.198632 | orchestrator |
2026-01-05 00:41:16.198644 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:41:16.198656 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:41:16.198668 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:41:16.198680 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:41:16.198691 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:41:16.198702 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:41:16.198713 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:41:16.198724 | orchestrator |
2026-01-05 00:41:16.198735 | orchestrator |
2026-01-05 00:41:16.198745 | orchestrator | TASKS RECAP
******************************************************************** 2026-01-05 00:41:16.198756 | orchestrator | Monday 05 January 2026 00:41:15 +0000 (0:00:11.584) 0:00:11.840 ******** 2026-01-05 00:41:16.198767 | orchestrator | =============================================================================== 2026-01-05 00:41:16.198778 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.58s 2026-01-05 00:41:16.678486 | orchestrator | + osism apply hddtemp 2026-01-05 00:41:28.926324 | orchestrator | 2026-01-05 00:41:28 | INFO  | Task 0b7b6865-6e0f-4e2b-bfed-5deb6456383b (hddtemp) was prepared for execution. 2026-01-05 00:41:28.926459 | orchestrator | 2026-01-05 00:41:28 | INFO  | It takes a moment until task 0b7b6865-6e0f-4e2b-bfed-5deb6456383b (hddtemp) has been started and output is visible here. 2026-01-05 00:41:58.252786 | orchestrator | 2026-01-05 00:41:58.252917 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-01-05 00:41:58.252935 | orchestrator | 2026-01-05 00:41:58.252948 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-01-05 00:41:58.252959 | orchestrator | Monday 05 January 2026 00:41:33 +0000 (0:00:00.289) 0:00:00.289 ******** 2026-01-05 00:41:58.252971 | orchestrator | ok: [testbed-manager] 2026-01-05 00:41:58.252984 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:41:58.252996 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:41:58.253007 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:41:58.253018 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:41:58.253030 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:41:58.253041 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:41:58.253052 | orchestrator | 2026-01-05 00:41:58.253063 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-01-05 00:41:58.253075 | orchestrator | Monday 05 January 2026 
00:41:34 +0000 (0:00:00.831) 0:00:01.120 ******** 2026-01-05 00:41:58.253087 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:41:58.253130 | orchestrator | 2026-01-05 00:41:58.253142 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-01-05 00:41:58.253153 | orchestrator | Monday 05 January 2026 00:41:35 +0000 (0:00:01.288) 0:00:02.409 ******** 2026-01-05 00:41:58.253164 | orchestrator | ok: [testbed-manager] 2026-01-05 00:41:58.253175 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:41:58.253186 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:41:58.253197 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:41:58.253207 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:41:58.253218 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:41:58.253229 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:41:58.253240 | orchestrator | 2026-01-05 00:41:58.253251 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-01-05 00:41:58.253262 | orchestrator | Monday 05 January 2026 00:41:37 +0000 (0:00:02.213) 0:00:04.622 ******** 2026-01-05 00:41:58.253273 | orchestrator | changed: [testbed-manager] 2026-01-05 00:41:58.253284 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:41:58.253296 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:41:58.253309 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:41:58.253322 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:41:58.253335 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:41:58.253347 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:41:58.253360 | orchestrator | 2026-01-05 00:41:58.253373 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-01-05 00:41:58.253386 | orchestrator | Monday 05 January 2026 00:41:39 +0000 (0:00:01.350) 0:00:05.974 ******** 2026-01-05 00:41:58.253399 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:41:58.253411 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:41:58.253424 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:41:58.253437 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:41:58.253450 | orchestrator | ok: [testbed-manager] 2026-01-05 00:41:58.253462 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:41:58.253475 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:41:58.253488 | orchestrator | 2026-01-05 00:41:58.253500 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-01-05 00:41:58.253513 | orchestrator | Monday 05 January 2026 00:41:40 +0000 (0:00:01.189) 0:00:07.163 ******** 2026-01-05 00:41:58.253525 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:41:58.253538 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:41:58.253601 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:41:58.253615 | orchestrator | changed: [testbed-manager] 2026-01-05 00:41:58.253629 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:41:58.253642 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:41:58.253655 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:41:58.253666 | orchestrator | 2026-01-05 00:41:58.253677 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-01-05 00:41:58.253688 | orchestrator | Monday 05 January 2026 00:41:41 +0000 (0:00:00.629) 0:00:07.792 ******** 2026-01-05 00:41:58.253699 | orchestrator | changed: [testbed-manager] 2026-01-05 00:41:58.253710 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:41:58.253721 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:41:58.253732 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:41:58.253743 | orchestrator | changed: 
[testbed-node-5] 2026-01-05 00:41:58.253754 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:41:58.253765 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:41:58.253776 | orchestrator | 2026-01-05 00:41:58.253786 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-01-05 00:41:58.253798 | orchestrator | Monday 05 January 2026 00:41:54 +0000 (0:00:13.659) 0:00:21.452 ******** 2026-01-05 00:41:58.253809 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:41:58.253832 | orchestrator | 2026-01-05 00:41:58.253843 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-01-05 00:41:58.253854 | orchestrator | Monday 05 January 2026 00:41:56 +0000 (0:00:01.208) 0:00:22.660 ******** 2026-01-05 00:41:58.253865 | orchestrator | changed: [testbed-manager] 2026-01-05 00:41:58.253876 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:41:58.253887 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:41:58.253898 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:41:58.253909 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:41:58.253919 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:41:58.253930 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:41:58.253941 | orchestrator | 2026-01-05 00:41:58.253952 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:41:58.253963 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:41:58.253994 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 00:41:58.254007 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 00:41:58.254062 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 00:41:58.254075 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 00:41:58.254086 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 00:41:58.254097 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 00:41:58.254108 | orchestrator | 2026-01-05 00:41:58.254119 | orchestrator | 2026-01-05 00:41:58.254130 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:41:58.254141 | orchestrator | Monday 05 January 2026 00:41:57 +0000 (0:00:01.885) 0:00:24.546 ******** 2026-01-05 00:41:58.254152 | orchestrator | =============================================================================== 2026-01-05 00:41:58.254163 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.66s 2026-01-05 00:41:58.254174 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.21s 2026-01-05 00:41:58.254185 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.89s 2026-01-05 00:41:58.254196 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.35s 2026-01-05 00:41:58.254207 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.29s 2026-01-05 00:41:58.254218 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.21s 2026-01-05 00:41:58.254229 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.19s 2026-01-05 00:41:58.254240 | orchestrator | osism.services.hddtemp : Gather 
variables for each operating system ----- 0.83s 2026-01-05 00:41:58.254251 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.63s 2026-01-05 00:41:58.529033 | orchestrator | ++ semver latest 7.1.1 2026-01-05 00:41:58.580632 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-05 00:41:58.580706 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-05 00:41:58.580714 | orchestrator | + sudo systemctl restart manager.service 2026-01-05 00:42:12.634422 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-01-05 00:42:12.634584 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-01-05 00:42:12.634603 | orchestrator | + local max_attempts=60 2026-01-05 00:42:12.634616 | orchestrator | + local name=ceph-ansible 2026-01-05 00:42:12.634657 | orchestrator | + local attempt_num=1 2026-01-05 00:42:12.634669 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-05 00:42:12.678388 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-05 00:42:12.678512 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-05 00:42:12.678538 | orchestrator | + sleep 5 2026-01-05 00:42:17.683404 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-05 00:42:17.707489 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-05 00:42:17.707638 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-05 00:42:17.707657 | orchestrator | + sleep 5 2026-01-05 00:42:22.711862 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-05 00:42:22.748764 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-05 00:42:22.748879 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-05 00:42:22.748896 | orchestrator | + sleep 5 2026-01-05 00:42:27.754127 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-05 00:42:27.789869 | orchestrator | + [[ 
unhealthy == \h\e\a\l\t\h\y ]] 2026-01-05 00:42:27.789969 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-05 00:42:27.789983 | orchestrator | + sleep 5 2026-01-05 00:42:32.794743 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-05 00:42:32.831099 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-05 00:42:32.831203 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-05 00:42:32.831214 | orchestrator | + sleep 5 2026-01-05 00:42:37.835471 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-05 00:42:37.873633 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-05 00:42:37.873744 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-05 00:42:37.873759 | orchestrator | + sleep 5 2026-01-05 00:42:42.879303 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-05 00:42:42.919288 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-05 00:42:42.919382 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-05 00:42:42.919391 | orchestrator | + sleep 5 2026-01-05 00:42:47.924206 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-05 00:42:47.996014 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-05 00:42:47.996149 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-05 00:42:47.996166 | orchestrator | + sleep 5 2026-01-05 00:42:53.041819 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-05 00:42:53.095573 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-05 00:42:53.095728 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-05 00:42:53.095757 | orchestrator | + sleep 5 2026-01-05 00:42:58.098355 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-05 00:42:58.132265 | orchestrator | + [[ starting == 
\h\e\a\l\t\h\y ]] 2026-01-05 00:42:58.132376 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-05 00:42:58.132393 | orchestrator | + sleep 5 2026-01-05 00:43:03.136179 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-05 00:43:03.165817 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-05 00:43:03.248352 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-05 00:43:03.248422 | orchestrator | + sleep 5 2026-01-05 00:43:08.169888 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-05 00:43:08.207908 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-05 00:43:08.208282 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-05 00:43:08.208301 | orchestrator | + sleep 5 2026-01-05 00:43:13.212654 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-05 00:43:13.252365 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-05 00:43:13.252483 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-05 00:43:13.252498 | orchestrator | + sleep 5 2026-01-05 00:43:18.257640 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-05 00:43:18.303397 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-05 00:43:18.303511 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-01-05 00:43:18.303612 | orchestrator | + local max_attempts=60 2026-01-05 00:43:18.303635 | orchestrator | + local name=kolla-ansible 2026-01-05 00:43:18.303648 | orchestrator | + local attempt_num=1 2026-01-05 00:43:18.303660 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-01-05 00:43:18.339399 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-05 00:43:18.339679 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-01-05 00:43:18.339702 | orchestrator | + local max_attempts=60 2026-01-05 
00:43:18.339715 | orchestrator | + local name=osism-ansible 2026-01-05 00:43:18.339739 | orchestrator | + local attempt_num=1 2026-01-05 00:43:18.340144 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-01-05 00:43:18.379228 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-05 00:43:18.379324 | orchestrator | + [[ true == \t\r\u\e ]] 2026-01-05 00:43:18.379338 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-01-05 00:43:18.552046 | orchestrator | ARA in ceph-ansible already disabled. 2026-01-05 00:43:18.726886 | orchestrator | ARA in kolla-ansible already disabled. 2026-01-05 00:43:18.903631 | orchestrator | ARA in osism-ansible already disabled. 2026-01-05 00:43:19.085241 | orchestrator | ARA in osism-kubernetes already disabled. 2026-01-05 00:43:19.086294 | orchestrator | + osism apply gather-facts 2026-01-05 00:43:31.454226 | orchestrator | 2026-01-05 00:43:31 | INFO  | Task 74bb4fc2-728a-45f9-bfbe-78e56a4564b4 (gather-facts) was prepared for execution. 2026-01-05 00:43:31.454361 | orchestrator | 2026-01-05 00:43:31 | INFO  | It takes a moment until task 74bb4fc2-728a-45f9-bfbe-78e56a4564b4 (gather-facts) has been started and output is visible here. 
2026-01-05 00:43:45.023288 | orchestrator | 2026-01-05 00:43:45.023405 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-05 00:43:45.023424 | orchestrator | 2026-01-05 00:43:45.023435 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-05 00:43:45.023443 | orchestrator | Monday 05 January 2026 00:43:35 +0000 (0:00:00.200) 0:00:00.200 ******** 2026-01-05 00:43:45.023450 | orchestrator | ok: [testbed-manager] 2026-01-05 00:43:45.023459 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:43:45.023466 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:43:45.023473 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:43:45.023480 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:43:45.023486 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:43:45.023494 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:43:45.023536 | orchestrator | 2026-01-05 00:43:45.023544 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-05 00:43:45.023551 | orchestrator | 2026-01-05 00:43:45.023557 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-05 00:43:45.023565 | orchestrator | Monday 05 January 2026 00:43:43 +0000 (0:00:08.147) 0:00:08.348 ******** 2026-01-05 00:43:45.023573 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:43:45.023581 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:43:45.023588 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:43:45.023594 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:43:45.023601 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:43:45.023608 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:43:45.023614 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:43:45.023621 | orchestrator | 2026-01-05 00:43:45.023628 | orchestrator | PLAY RECAP 
********************************************************************* 2026-01-05 00:43:45.023635 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 00:43:45.023643 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 00:43:45.023650 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 00:43:45.023657 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 00:43:45.023664 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 00:43:45.023671 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 00:43:45.023701 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 00:43:45.023708 | orchestrator | 2026-01-05 00:43:45.023715 | orchestrator | 2026-01-05 00:43:45.023721 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:43:45.023728 | orchestrator | Monday 05 January 2026 00:43:44 +0000 (0:00:00.572) 0:00:08.920 ******** 2026-01-05 00:43:45.023735 | orchestrator | =============================================================================== 2026-01-05 00:43:45.023741 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.15s 2026-01-05 00:43:45.023748 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.57s 2026-01-05 00:43:45.417018 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-01-05 00:43:45.434864 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-01-05 00:43:45.452670 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-01-05 00:43:45.465833 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-01-05 00:43:45.478087 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-01-05 00:43:45.491773 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-01-05 00:43:45.508467 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-01-05 00:43:45.525045 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-01-05 00:43:45.540440 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-01-05 00:43:45.554422 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-01-05 00:43:45.570147 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-01-05 00:43:45.583794 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-01-05 00:43:45.606473 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-01-05 00:43:45.623449 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-01-05 00:43:45.644615 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-01-05 00:43:45.660631 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-01-05 00:43:45.681290 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-01-05 00:43:45.701197 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-01-05 00:43:45.717687 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-01-05 00:43:45.733456 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-01-05 00:43:45.750593 | orchestrator | + [[ false == \t\r\u\e ]] 2026-01-05 00:43:45.869141 | orchestrator | ok: Runtime: 0:25:04.530624 2026-01-05 00:43:45.969373 | 2026-01-05 00:43:45.969512 | TASK [Deploy services] 2026-01-05 00:43:46.503845 | orchestrator | skipping: Conditional result was False 2026-01-05 00:43:46.520762 | 2026-01-05 00:43:46.520950 | TASK [Deploy in a nutshell] 2026-01-05 00:43:47.241281 | orchestrator | + set -e 2026-01-05 00:43:47.241433 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-05 00:43:47.241444 | orchestrator | ++ export INTERACTIVE=false 2026-01-05 00:43:47.241453 | orchestrator | ++ INTERACTIVE=false 2026-01-05 00:43:47.241459 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-05 00:43:47.241464 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-05 00:43:47.241470 | orchestrator | + source /opt/manager-vars.sh 2026-01-05 00:43:47.241533 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-05 00:43:47.241547 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-05 00:43:47.241553 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-05 00:43:47.241560 | orchestrator | ++ CEPH_VERSION=reef 2026-01-05 00:43:47.241564 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-05 00:43:47.241572 | orchestrator | ++ 
CONFIGURATION_VERSION=main 2026-01-05 00:43:47.241576 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-05 00:43:47.241584 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-05 00:43:47.241588 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-05 00:43:47.241596 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-05 00:43:47.241600 | orchestrator | ++ export ARA=false 2026-01-05 00:43:47.241604 | orchestrator | ++ ARA=false 2026-01-05 00:43:47.241607 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-05 00:43:47.241612 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-05 00:43:47.241625 | orchestrator | ++ export TEMPEST=true 2026-01-05 00:43:47.241629 | orchestrator | ++ TEMPEST=true 2026-01-05 00:43:47.241633 | orchestrator | ++ export IS_ZUUL=true 2026-01-05 00:43:47.242555 | orchestrator | 2026-01-05 00:43:47.242564 | orchestrator | # PULL IMAGES 2026-01-05 00:43:47.242569 | orchestrator | 2026-01-05 00:43:47.242573 | orchestrator | ++ IS_ZUUL=true 2026-01-05 00:43:47.242578 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.14 2026-01-05 00:43:47.242583 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.14 2026-01-05 00:43:47.242587 | orchestrator | ++ export EXTERNAL_API=false 2026-01-05 00:43:47.242592 | orchestrator | ++ EXTERNAL_API=false 2026-01-05 00:43:47.242596 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-05 00:43:47.242600 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-05 00:43:47.242605 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-05 00:43:47.242609 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-05 00:43:47.242614 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-05 00:43:47.242623 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-05 00:43:47.242627 | orchestrator | + echo 2026-01-05 00:43:47.242631 | orchestrator | + echo '# PULL IMAGES' 2026-01-05 00:43:47.242635 | orchestrator | + echo 2026-01-05 00:43:47.243183 | orchestrator | ++ semver latest 7.0.0 2026-01-05 
00:43:47.308055 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-05 00:43:47.308149 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-05 00:43:47.308159 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-01-05 00:43:49.317955 | orchestrator | 2026-01-05 00:43:49 | INFO  | Trying to run play pull-images in environment custom 2026-01-05 00:43:59.402989 | orchestrator | 2026-01-05 00:43:59 | INFO  | Task 1f30aca2-3e98-4f99-888e-823549352c84 (pull-images) was prepared for execution. 2026-01-05 00:43:59.403089 | orchestrator | 2026-01-05 00:43:59 | INFO  | Task 1f30aca2-3e98-4f99-888e-823549352c84 is running in background. No more output. Check ARA for logs. 2026-01-05 00:44:01.959470 | orchestrator | 2026-01-05 00:44:01 | INFO  | Trying to run play wipe-partitions in environment custom 2026-01-05 00:44:12.082932 | orchestrator | 2026-01-05 00:44:12 | INFO  | Task e122cb09-ed25-4117-b4b2-ea601e4ac06e (wipe-partitions) was prepared for execution. 2026-01-05 00:44:12.083093 | orchestrator | 2026-01-05 00:44:12 | INFO  | It takes a moment until task e122cb09-ed25-4117-b4b2-ea601e4ac06e (wipe-partitions) has been started and output is visible here. 
2026-01-05 00:44:25.263671 | orchestrator |
2026-01-05 00:44:25.263789 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-01-05 00:44:25.263803 | orchestrator |
2026-01-05 00:44:25.263814 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-01-05 00:44:25.263830 | orchestrator | Monday 05 January 2026 00:44:16 +0000 (0:00:00.133) 0:00:00.133 ********
2026-01-05 00:44:25.263843 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:44:25.263853 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:44:25.263862 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:44:25.263872 | orchestrator |
2026-01-05 00:44:25.263881 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-01-05 00:44:25.263938 | orchestrator | Monday 05 January 2026 00:44:16 +0000 (0:00:00.725) 0:00:00.858 ********
2026-01-05 00:44:25.263948 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:44:25.263957 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:25.263970 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:44:25.263979 | orchestrator |
2026-01-05 00:44:25.263988 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-01-05 00:44:25.263996 | orchestrator | Monday 05 January 2026 00:44:17 +0000 (0:00:00.465) 0:00:01.323 ********
2026-01-05 00:44:25.264005 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:44:25.264015 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:44:25.264023 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:44:25.264032 | orchestrator |
2026-01-05 00:44:25.264041 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-01-05 00:44:25.264050 | orchestrator | Monday 05 January 2026 00:44:17 +0000 (0:00:00.620) 0:00:01.944 ********
2026-01-05 00:44:25.264059 | orchestrator | skipping:
[testbed-node-3]
2026-01-05 00:44:25.264068 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:25.264077 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:44:25.264085 | orchestrator |
2026-01-05 00:44:25.264094 | orchestrator | TASK [Check device availability] ***********************************************
2026-01-05 00:44:25.264103 | orchestrator | Monday 05 January 2026 00:44:18 +0000 (0:00:00.266) 0:00:02.211 ********
2026-01-05 00:44:25.264112 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-01-05 00:44:25.264124 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-01-05 00:44:25.264133 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-01-05 00:44:25.264142 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-01-05 00:44:25.264151 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-01-05 00:44:25.264159 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-01-05 00:44:25.264168 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-01-05 00:44:25.264176 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-01-05 00:44:25.264185 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-01-05 00:44:25.264196 | orchestrator |
2026-01-05 00:44:25.264206 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-01-05 00:44:25.264216 | orchestrator | Monday 05 January 2026 00:44:19 +0000 (0:00:01.278) 0:00:03.489 ********
2026-01-05 00:44:25.264226 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-01-05 00:44:25.264237 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-01-05 00:44:25.264247 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-01-05 00:44:25.264257 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-01-05 00:44:25.264267 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-01-05 00:44:25.264278 | orchestrator | ok:
[testbed-node-5] => (item=/dev/sdc)
2026-01-05 00:44:25.264288 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-01-05 00:44:25.264297 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-01-05 00:44:25.264308 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-01-05 00:44:25.264316 | orchestrator |
2026-01-05 00:44:25.264325 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-01-05 00:44:25.264333 | orchestrator | Monday 05 January 2026 00:44:21 +0000 (0:00:01.692) 0:00:05.181 ********
2026-01-05 00:44:25.264342 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-01-05 00:44:25.264351 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-01-05 00:44:25.264359 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-01-05 00:44:25.264368 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-01-05 00:44:25.264376 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-01-05 00:44:25.264391 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-01-05 00:44:25.264400 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-01-05 00:44:25.264416 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-01-05 00:44:25.264425 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-01-05 00:44:25.264434 | orchestrator |
2026-01-05 00:44:25.264443 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-01-05 00:44:25.264498 | orchestrator | Monday 05 January 2026 00:44:23 +0000 (0:00:02.298) 0:00:07.480 ********
2026-01-05 00:44:25.264507 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:44:25.264516 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:44:25.264524 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:44:25.264533 | orchestrator |
2026-01-05 00:44:25.264541 | orchestrator | TASK [Request device events from the
kernel] ***********************************
2026-01-05 00:44:25.264550 | orchestrator | Monday 05 January 2026 00:44:24 +0000 (0:00:00.604) 0:00:08.084 ********
2026-01-05 00:44:25.264559 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:44:25.264568 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:44:25.264576 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:44:25.264585 | orchestrator |
2026-01-05 00:44:25.264594 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:44:25.264605 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:44:25.264616 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:44:25.264642 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:44:25.264651 | orchestrator |
2026-01-05 00:44:25.264660 | orchestrator |
2026-01-05 00:44:25.264669 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:44:25.264678 | orchestrator | Monday 05 January 2026 00:44:24 +0000 (0:00:00.736) 0:00:08.821 ********
2026-01-05 00:44:25.264687 | orchestrator | ===============================================================================
2026-01-05 00:44:25.264695 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.30s
2026-01-05 00:44:25.264704 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.69s
2026-01-05 00:44:25.264712 | orchestrator | Check device availability ----------------------------------------------- 1.28s
2026-01-05 00:44:25.264721 | orchestrator | Request device events from the kernel ----------------------------------- 0.74s
2026-01-05 00:44:25.264730 | orchestrator | Find all logical devices owned by UID 167
------------------------------- 0.73s
2026-01-05 00:44:25.264738 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.62s
2026-01-05 00:44:25.264747 | orchestrator | Reload udev rules ------------------------------------------------------- 0.60s
2026-01-05 00:44:25.264756 | orchestrator | Remove all rook related logical devices --------------------------------- 0.47s
2026-01-05 00:44:25.264765 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.27s
2026-01-05 00:44:38.073513 | orchestrator | 2026-01-05 00:44:38 | INFO  | Task 2a6dc8ff-9fd7-471d-aced-a62a36dbb971 (facts) was prepared for execution.
2026-01-05 00:44:38.073613 | orchestrator | 2026-01-05 00:44:38 | INFO  | It takes a moment until task 2a6dc8ff-9fd7-471d-aced-a62a36dbb971 (facts) has been started and output is visible here.
2026-01-05 00:44:51.457822 | orchestrator |
2026-01-05 00:44:51.457923 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-01-05 00:44:51.457935 | orchestrator |
2026-01-05 00:44:51.457944 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-01-05 00:44:51.457951 | orchestrator | Monday 05 January 2026 00:44:42 +0000 (0:00:00.294) 0:00:00.294 ********
2026-01-05 00:44:51.457957 | orchestrator | ok: [testbed-manager]
2026-01-05 00:44:51.457965 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:44:51.457971 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:44:51.458007 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:44:51.458070 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:44:51.458077 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:44:51.458083 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:44:51.458089 | orchestrator |
2026-01-05 00:44:51.458097 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-01-05 00:44:51.458104 |
orchestrator | Monday 05 January 2026 00:44:44 +0000 (0:00:01.213) 0:00:01.508 ********
2026-01-05 00:44:51.458110 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:44:51.458118 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:44:51.458124 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:44:51.458130 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:44:51.458136 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:44:51.458142 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:51.458147 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:44:51.458167 | orchestrator |
2026-01-05 00:44:51.458174 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-05 00:44:51.458180 | orchestrator |
2026-01-05 00:44:51.458187 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-05 00:44:51.458193 | orchestrator | Monday 05 January 2026 00:44:45 +0000 (0:00:01.464) 0:00:02.972 ********
2026-01-05 00:44:51.458199 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:44:51.458205 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:44:51.458212 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:44:51.458219 | orchestrator | ok: [testbed-manager]
2026-01-05 00:44:51.458225 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:44:51.458232 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:44:51.458238 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:44:51.458244 | orchestrator |
2026-01-05 00:44:51.458251 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-01-05 00:44:51.458257 | orchestrator |
2026-01-05 00:44:51.458263 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-01-05 00:44:51.458286 | orchestrator | Monday 05 January 2026 00:44:50 +0000 (0:00:04.800) 0:00:07.773 ********
2026-01-05 00:44:51.458292 | orchestrator |
skipping: [testbed-manager]
2026-01-05 00:44:51.458298 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:44:51.458303 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:44:51.458309 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:44:51.458315 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:44:51.458321 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:51.458328 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:44:51.458336 | orchestrator |
2026-01-05 00:44:51.458342 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:44:51.458348 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:44:51.458357 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:44:51.458363 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:44:51.458369 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:44:51.458376 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:44:51.458382 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:44:51.458389 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:44:51.458396 | orchestrator |
2026-01-05 00:44:51.458413 | orchestrator |
2026-01-05 00:44:51.458434 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:44:51.458441 | orchestrator | Monday 05 January 2026 00:44:50 +0000 (0:00:00.537) 0:00:08.311 ********
2026-01-05 00:44:51.458448 | orchestrator | ===============================================================================
2026-01-05 00:44:51.458454 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.80s
2026-01-05 00:44:51.458461 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.46s
2026-01-05 00:44:51.458467 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.21s
2026-01-05 00:44:51.458473 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s
2026-01-05 00:44:54.451724 | orchestrator | 2026-01-05 00:44:54 | INFO  | Task 9e5fa10a-14de-41a6-bb68-a62b55cfb592 (ceph-configure-lvm-volumes) was prepared for execution.
2026-01-05 00:44:54.451861 | orchestrator | 2026-01-05 00:44:54 | INFO  | It takes a moment until task 9e5fa10a-14de-41a6-bb68-a62b55cfb592 (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-01-05 00:45:06.140722 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-05 00:45:06.140884 | orchestrator | 2.16.14
2026-01-05 00:45:06.140894 | orchestrator |
2026-01-05 00:45:06.140902 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-05 00:45:06.140909 | orchestrator |
2026-01-05 00:45:06.140918 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-05 00:45:06.140925 | orchestrator | Monday 05 January 2026 00:44:59 +0000 (0:00:00.312) 0:00:00.312 ********
2026-01-05 00:45:06.140932 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-05 00:45:06.140938 | orchestrator |
2026-01-05 00:45:06.140944 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-05 00:45:06.140950 | orchestrator | Monday 05 January 2026 00:44:59 +0000 (0:00:00.230) 0:00:00.542 ********
2026-01-05 00:45:06.140956 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:45:06.140962 | orchestrator |
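The play above first collects every kernel block device on the node (the later tasks show `loop0`..`loop7`, `sda`..`sdd`, and `sr0` being iterated) and then narrows that down to disks that can actually host OSDs. A loose illustration of that narrowing step, with the function name and ignore list chosen here for illustration rather than taken from the playbook source:

```python
# Illustrative sketch only: the real task logic lives in the osism Ansible
# playbooks. This mirrors the idea of filtering a raw kernel device list
# (as seen in the loop items above) down to candidate disks, dropping
# loop devices, CD-ROM drives, and device-mapper nodes.

def candidate_devices(devices):
    """Filter kernel block device names down to plausible whole disks."""
    ignored_prefixes = ("loop", "sr", "dm-")  # assumption for this sketch
    return [d for d in devices if not d.startswith(ignored_prefixes)]

seen = ["loop0", "loop1", "sda", "sdb", "sdc", "sdd", "sr0"]
print(candidate_devices(seen))  # ['sda', 'sdb', 'sdc', 'sdd']
```

In the log, `sda` still carries the OS partitions (`sda1`, `sda14`, ...), which is why only `sdb` and `sdc` end up in the final `ceph_osd_devices` mapping.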
2026-01-05 00:45:06.140968 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:06.140974 | orchestrator | Monday 05 January 2026 00:44:59 +0000 (0:00:00.268) 0:00:00.810 ********
2026-01-05 00:45:06.140981 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-01-05 00:45:06.141007 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-01-05 00:45:06.141014 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-01-05 00:45:06.141020 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-01-05 00:45:06.141025 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-01-05 00:45:06.141031 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-01-05 00:45:06.141037 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-01-05 00:45:06.141043 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-01-05 00:45:06.141049 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-01-05 00:45:06.141055 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-01-05 00:45:06.141067 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-01-05 00:45:06.141073 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-01-05 00:45:06.141082 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-01-05 00:45:06.141091 | orchestrator |
2026-01-05 00:45:06.141100 | orchestrator | TASK [Add known links to the list of
available block devices] ******************
2026-01-05 00:45:06.141132 | orchestrator | Monday 05 January 2026 00:44:59 +0000 (0:00:00.446) 0:00:01.257 ********
2026-01-05 00:45:06.141142 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:45:06.141152 | orchestrator |
2026-01-05 00:45:06.141160 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:06.141170 | orchestrator | Monday 05 January 2026 00:45:00 +0000 (0:00:00.193) 0:00:01.450 ********
2026-01-05 00:45:06.141180 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:45:06.141189 | orchestrator |
2026-01-05 00:45:06.141199 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:06.141209 | orchestrator | Monday 05 January 2026 00:45:00 +0000 (0:00:00.175) 0:00:01.626 ********
2026-01-05 00:45:06.141217 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:45:06.141223 | orchestrator |
2026-01-05 00:45:06.141229 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:06.141239 | orchestrator | Monday 05 January 2026 00:45:00 +0000 (0:00:00.207) 0:00:01.833 ********
2026-01-05 00:45:06.141245 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:45:06.141251 | orchestrator |
2026-01-05 00:45:06.141256 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:06.141262 | orchestrator | Monday 05 January 2026 00:45:00 +0000 (0:00:00.211) 0:00:02.045 ********
2026-01-05 00:45:06.141268 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:45:06.141274 | orchestrator |
2026-01-05 00:45:06.141280 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:06.141286 | orchestrator | Monday 05 January 2026 00:45:00 +0000 (0:00:00.194) 0:00:02.239 ********
2026-01-05 00:45:06.141319 | orchestrator | skipping:
[testbed-node-3]
2026-01-05 00:45:06.141325 | orchestrator |
2026-01-05 00:45:06.141331 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:06.141337 | orchestrator | Monday 05 January 2026 00:45:01 +0000 (0:00:00.195) 0:00:02.434 ********
2026-01-05 00:45:06.141385 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:45:06.141392 | orchestrator |
2026-01-05 00:45:06.141398 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:06.141518 | orchestrator | Monday 05 January 2026 00:45:01 +0000 (0:00:00.208) 0:00:02.643 ********
2026-01-05 00:45:06.141528 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:45:06.141596 | orchestrator |
2026-01-05 00:45:06.141605 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:06.141611 | orchestrator | Monday 05 January 2026 00:45:01 +0000 (0:00:00.188) 0:00:02.831 ********
2026-01-05 00:45:06.141617 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f)
2026-01-05 00:45:06.141624 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f)
2026-01-05 00:45:06.141630 | orchestrator |
2026-01-05 00:45:06.141636 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:06.141659 | orchestrator | Monday 05 January 2026 00:45:01 +0000 (0:00:00.398) 0:00:03.229 ********
2026-01-05 00:45:06.141665 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_40600621-aef8-490d-8855-2a618a83589e)
2026-01-05 00:45:06.141671 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_40600621-aef8-490d-8855-2a618a83589e)
2026-01-05 00:45:06.141677 | orchestrator |
2026-01-05 00:45:06.141683 | orchestrator | TASK [Add known links to the list of available block
devices] ******************
2026-01-05 00:45:06.141689 | orchestrator | Monday 05 January 2026 00:45:02 +0000 (0:00:00.584) 0:00:03.814 ********
2026-01-05 00:45:06.141695 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_423e4112-2158-480f-994d-106730fe425c)
2026-01-05 00:45:06.141701 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_423e4112-2158-480f-994d-106730fe425c)
2026-01-05 00:45:06.141707 | orchestrator |
2026-01-05 00:45:06.141712 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:06.141728 | orchestrator | Monday 05 January 2026 00:45:03 +0000 (0:00:00.579) 0:00:04.394 ********
2026-01-05 00:45:06.141733 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_177f10be-5bcc-4fc5-a906-9c9dfc4c0725)
2026-01-05 00:45:06.141739 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_177f10be-5bcc-4fc5-a906-9c9dfc4c0725)
2026-01-05 00:45:06.141745 | orchestrator |
2026-01-05 00:45:06.141751 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:06.141757 | orchestrator | Monday 05 January 2026 00:45:03 +0000 (0:00:00.748) 0:00:05.142 ********
2026-01-05 00:45:06.141763 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-05 00:45:06.141769 | orchestrator |
2026-01-05 00:45:06.141780 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:45:06.141786 | orchestrator | Monday 05 January 2026 00:45:04 +0000 (0:00:00.357) 0:00:05.500 ********
2026-01-05 00:45:06.141791 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-01-05 00:45:06.141797 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-01-05 00:45:06.141803 | orchestrator | included:
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-01-05 00:45:06.141809 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-01-05 00:45:06.141814 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-01-05 00:45:06.141820 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-01-05 00:45:06.141826 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-01-05 00:45:06.141832 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-01-05 00:45:06.141838 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-01-05 00:45:06.141843 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-01-05 00:45:06.141850 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-01-05 00:45:06.141855 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-01-05 00:45:06.141861 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-01-05 00:45:06.141867 | orchestrator |
2026-01-05 00:45:06.141873 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:45:06.141879 | orchestrator | Monday 05 January 2026 00:45:04 +0000 (0:00:00.409) 0:00:05.909 ********
2026-01-05 00:45:06.141884 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:45:06.141890 | orchestrator |
2026-01-05 00:45:06.141896 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:45:06.141902 | orchestrator | Monday 05 January 2026 00:45:04 +0000 (0:00:00.202)
0:00:06.112 ********
2026-01-05 00:45:06.141908 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:45:06.141913 | orchestrator |
2026-01-05 00:45:06.141919 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:45:06.141925 | orchestrator | Monday 05 January 2026 00:45:05 +0000 (0:00:00.183) 0:00:06.295 ********
2026-01-05 00:45:06.141931 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:45:06.141937 | orchestrator |
2026-01-05 00:45:06.141942 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:45:06.141948 | orchestrator | Monday 05 January 2026 00:45:05 +0000 (0:00:00.219) 0:00:06.515 ********
2026-01-05 00:45:06.141954 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:45:06.141960 | orchestrator |
2026-01-05 00:45:06.141966 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:45:06.141971 | orchestrator | Monday 05 January 2026 00:45:05 +0000 (0:00:00.275) 0:00:06.790 ********
2026-01-05 00:45:06.141982 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:45:06.141988 | orchestrator |
2026-01-05 00:45:06.141994 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:45:06.141999 | orchestrator | Monday 05 January 2026 00:45:05 +0000 (0:00:00.205) 0:00:06.996 ********
2026-01-05 00:45:06.142005 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:45:06.142011 | orchestrator |
2026-01-05 00:45:06.142088 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:45:06.142095 | orchestrator | Monday 05 January 2026 00:45:05 +0000 (0:00:00.212) 0:00:07.209 ********
2026-01-05 00:45:06.142101 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:45:06.142107 | orchestrator |
2026-01-05 00:45:06.142117 | orchestrator | TASK [Add known partitions to the
list of available block devices] *************
2026-01-05 00:45:13.282371 | orchestrator | Monday 05 January 2026 00:45:06 +0000 (0:00:00.195) 0:00:07.404 ********
2026-01-05 00:45:13.282561 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:45:13.282574 | orchestrator |
2026-01-05 00:45:13.282582 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:45:13.282615 | orchestrator | Monday 05 January 2026 00:45:06 +0000 (0:00:00.238) 0:00:07.643 ********
2026-01-05 00:45:13.282621 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-01-05 00:45:13.282627 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-01-05 00:45:13.282633 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-01-05 00:45:13.282639 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-01-05 00:45:13.282644 | orchestrator |
2026-01-05 00:45:13.282649 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:45:13.282654 | orchestrator | Monday 05 January 2026 00:45:07 +0000 (0:00:00.888) 0:00:08.531 ********
2026-01-05 00:45:13.282660 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:45:13.282665 | orchestrator |
2026-01-05 00:45:13.282670 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:45:13.282674 | orchestrator | Monday 05 January 2026 00:45:07 +0000 (0:00:00.205) 0:00:08.736 ********
2026-01-05 00:45:13.282679 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:45:13.282684 | orchestrator |
2026-01-05 00:45:13.282688 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:45:13.282693 | orchestrator | Monday 05 January 2026 00:45:07 +0000 (0:00:00.196) 0:00:08.932 ********
2026-01-05 00:45:13.282698 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:45:13.282703 | orchestrator |
2026-01-05 00:45:13.282708 |
orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:45:13.282712 | orchestrator | Monday 05 January 2026 00:45:07 +0000 (0:00:00.209) 0:00:09.142 ********
2026-01-05 00:45:13.282717 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:45:13.282721 | orchestrator |
2026-01-05 00:45:13.282726 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-01-05 00:45:13.282731 | orchestrator | Monday 05 January 2026 00:45:08 +0000 (0:00:00.203) 0:00:09.346 ********
2026-01-05 00:45:13.282735 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-01-05 00:45:13.282740 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-01-05 00:45:13.282745 | orchestrator |
2026-01-05 00:45:13.282766 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-05 00:45:13.282771 | orchestrator | Monday 05 January 2026 00:45:08 +0000 (0:00:00.156) 0:00:09.502 ********
2026-01-05 00:45:13.282775 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:45:13.282780 | orchestrator |
2026-01-05 00:45:13.282785 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-05 00:45:13.282789 | orchestrator | Monday 05 January 2026 00:45:08 +0000 (0:00:00.127) 0:00:09.630 ********
2026-01-05 00:45:13.282794 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:45:13.282799 | orchestrator |
2026-01-05 00:45:13.282803 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-05 00:45:13.282823 | orchestrator | Monday 05 January 2026 00:45:08 +0000 (0:00:00.132) 0:00:09.763 ********
2026-01-05 00:45:13.282828 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:45:13.282833 | orchestrator |
2026-01-05 00:45:13.282837 | orchestrator | TASK [Define lvm_volumes structures]
*******************************************
2026-01-05 00:45:13.282842 | orchestrator | Monday 05 January 2026 00:45:08 +0000 (0:00:00.117) 0:00:09.880 ********
2026-01-05 00:45:13.282847 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:45:13.282852 | orchestrator |
2026-01-05 00:45:13.282856 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-05 00:45:13.282861 | orchestrator | Monday 05 January 2026 00:45:08 +0000 (0:00:00.143) 0:00:10.023 ********
2026-01-05 00:45:13.282866 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5dd43ce6-96bd-500c-b036-3c9652e3f870'}})
2026-01-05 00:45:13.282871 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6f45f623-6f4a-59be-980f-23e900ac5d1d'}})
2026-01-05 00:45:13.282876 | orchestrator |
2026-01-05 00:45:13.282880 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-05 00:45:13.282885 | orchestrator | Monday 05 January 2026 00:45:08 +0000 (0:00:00.173) 0:00:10.197 ********
2026-01-05 00:45:13.282891 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5dd43ce6-96bd-500c-b036-3c9652e3f870'}})
2026-01-05 00:45:13.282901 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6f45f623-6f4a-59be-980f-23e900ac5d1d'}})
2026-01-05 00:45:13.282907 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:45:13.282913 | orchestrator |
2026-01-05 00:45:13.282919 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-05 00:45:13.282924 | orchestrator | Monday 05 January 2026 00:45:09 +0000 (0:00:00.157) 0:00:10.355 ********
2026-01-05 00:45:13.282930 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5dd43ce6-96bd-500c-b036-3c9652e3f870'}})
2026-01-05 00:45:13.282936 |
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6f45f623-6f4a-59be-980f-23e900ac5d1d'}})  2026-01-05 00:45:13.282941 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:45:13.282947 | orchestrator | 2026-01-05 00:45:13.282952 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-05 00:45:13.282957 | orchestrator | Monday 05 January 2026 00:45:09 +0000 (0:00:00.300) 0:00:10.656 ******** 2026-01-05 00:45:13.282963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5dd43ce6-96bd-500c-b036-3c9652e3f870'}})  2026-01-05 00:45:13.282982 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6f45f623-6f4a-59be-980f-23e900ac5d1d'}})  2026-01-05 00:45:13.282988 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:45:13.282993 | orchestrator | 2026-01-05 00:45:13.282998 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-05 00:45:13.283007 | orchestrator | Monday 05 January 2026 00:45:09 +0000 (0:00:00.153) 0:00:10.809 ******** 2026-01-05 00:45:13.283012 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:45:13.283018 | orchestrator | 2026-01-05 00:45:13.283023 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-05 00:45:13.283028 | orchestrator | Monday 05 January 2026 00:45:09 +0000 (0:00:00.115) 0:00:10.925 ******** 2026-01-05 00:45:13.283033 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:45:13.283039 | orchestrator | 2026-01-05 00:45:13.283044 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-05 00:45:13.283049 | orchestrator | Monday 05 January 2026 00:45:09 +0000 (0:00:00.136) 0:00:11.061 ******** 2026-01-05 00:45:13.283054 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:45:13.283059 | orchestrator | 
2026-01-05 00:45:13.283065 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-01-05 00:45:13.283070 | orchestrator | Monday 05 January 2026 00:45:09 +0000 (0:00:00.117) 0:00:11.179 ********
2026-01-05 00:45:13.283080 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:45:13.283085 | orchestrator |
2026-01-05 00:45:13.283090 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-01-05 00:45:13.283095 | orchestrator | Monday 05 January 2026 00:45:10 +0000 (0:00:00.137) 0:00:11.316 ********
2026-01-05 00:45:13.283100 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:45:13.283106 | orchestrator |
2026-01-05 00:45:13.283111 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-05 00:45:13.283117 | orchestrator | Monday 05 January 2026 00:45:10 +0000 (0:00:00.129) 0:00:11.445 ********
2026-01-05 00:45:13.283122 | orchestrator | ok: [testbed-node-3] => {
2026-01-05 00:45:13.283127 | orchestrator |     "ceph_osd_devices": {
2026-01-05 00:45:13.283133 | orchestrator |         "sdb": {
2026-01-05 00:45:13.283139 | orchestrator |             "osd_lvm_uuid": "5dd43ce6-96bd-500c-b036-3c9652e3f870"
2026-01-05 00:45:13.283144 | orchestrator |         },
2026-01-05 00:45:13.283149 | orchestrator |         "sdc": {
2026-01-05 00:45:13.283154 | orchestrator |             "osd_lvm_uuid": "6f45f623-6f4a-59be-980f-23e900ac5d1d"
2026-01-05 00:45:13.283160 | orchestrator |         }
2026-01-05 00:45:13.283165 | orchestrator |     }
2026-01-05 00:45:13.283171 | orchestrator | }
2026-01-05 00:45:13.283176 | orchestrator |
2026-01-05 00:45:13.283181 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-05 00:45:13.283187 | orchestrator | Monday 05 January 2026 00:45:10 +0000 (0:00:00.125) 0:00:11.570 ********
2026-01-05 00:45:13.283192 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:45:13.283197 | orchestrator |
2026-01-05 00:45:13.283203 | orchestrator | TASK [Print DB devices] ********************************************************
2026-01-05 00:45:13.283208 | orchestrator | Monday 05 January 2026 00:45:10 +0000 (0:00:00.163) 0:00:11.734 ********
2026-01-05 00:45:13.283212 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:45:13.283217 | orchestrator |
2026-01-05 00:45:13.283222 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-05 00:45:13.283226 | orchestrator | Monday 05 January 2026 00:45:10 +0000 (0:00:00.122) 0:00:11.856 ********
2026-01-05 00:45:13.283231 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:45:13.283236 | orchestrator |
2026-01-05 00:45:13.283240 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-05 00:45:13.283245 | orchestrator | Monday 05 January 2026 00:45:10 +0000 (0:00:00.133) 0:00:11.990 ********
2026-01-05 00:45:13.283249 | orchestrator | changed: [testbed-node-3] => {
2026-01-05 00:45:13.283254 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-05 00:45:13.283259 | orchestrator |         "ceph_osd_devices": {
2026-01-05 00:45:13.283263 | orchestrator |             "sdb": {
2026-01-05 00:45:13.283268 | orchestrator |                 "osd_lvm_uuid": "5dd43ce6-96bd-500c-b036-3c9652e3f870"
2026-01-05 00:45:13.283272 | orchestrator |             },
2026-01-05 00:45:13.283277 | orchestrator |             "sdc": {
2026-01-05 00:45:13.283282 | orchestrator |                 "osd_lvm_uuid": "6f45f623-6f4a-59be-980f-23e900ac5d1d"
2026-01-05 00:45:13.283286 | orchestrator |             }
2026-01-05 00:45:13.283291 | orchestrator |         },
2026-01-05 00:45:13.283295 | orchestrator |         "lvm_volumes": [
2026-01-05 00:45:13.283300 | orchestrator |             {
2026-01-05 00:45:13.283305 | orchestrator |                 "data": "osd-block-5dd43ce6-96bd-500c-b036-3c9652e3f870",
2026-01-05 00:45:13.283310 | orchestrator |                 "data_vg": "ceph-5dd43ce6-96bd-500c-b036-3c9652e3f870"
2026-01-05 00:45:13.283314 | orchestrator |             },
2026-01-05 00:45:13.283319 | orchestrator |             {
2026-01-05 00:45:13.283324 | orchestrator |                 "data": "osd-block-6f45f623-6f4a-59be-980f-23e900ac5d1d",
2026-01-05 00:45:13.283328 | orchestrator |                 "data_vg": "ceph-6f45f623-6f4a-59be-980f-23e900ac5d1d"
2026-01-05 00:45:13.283336 | orchestrator |             }
2026-01-05 00:45:13.283341 | orchestrator |         ]
2026-01-05 00:45:13.283345 | orchestrator |     }
2026-01-05 00:45:13.283354 | orchestrator | }
2026-01-05 00:45:13.283358 | orchestrator |
2026-01-05 00:45:13.283363 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-05 00:45:13.283367 | orchestrator | Monday 05 January 2026 00:45:11 +0000 (0:00:00.335) 0:00:12.325 ********
2026-01-05 00:45:13.283372 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-05 00:45:13.283377 | orchestrator |
2026-01-05 00:45:13.283381 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-05 00:45:13.283386 | orchestrator |
2026-01-05 00:45:13.283407 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-05 00:45:13.283412 | orchestrator | Monday 05 January 2026 00:45:12 +0000 (0:00:01.765) 0:00:14.091 ********
2026-01-05 00:45:13.283417 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-01-05 00:45:13.283421 | orchestrator |
2026-01-05 00:45:13.283426 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-05 00:45:13.283430 | orchestrator | Monday 05 January 2026 00:45:13 +0000 (0:00:00.230) 0:00:14.321 ********
2026-01-05 00:45:13.283435 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:45:13.283440 | orchestrator |
2026-01-05 00:45:13.283447 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:21.317817 | orchestrator | Monday 05 January 2026 00:45:13 +0000 (0:00:00.228) 0:00:14.550 ********
2026-01-05 00:45:21.317941 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-01-05 00:45:21.317958 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-01-05 00:45:21.317970 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-01-05 00:45:21.317981 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-01-05 00:45:21.317993 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-01-05 00:45:21.318004 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-01-05 00:45:21.318085 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-01-05 00:45:21.318115 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-01-05 00:45:21.318133 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-01-05 00:45:21.318153 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-01-05 00:45:21.318171 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-01-05 00:45:21.318197 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-01-05 00:45:21.318217 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-01-05 00:45:21.318236 | orchestrator |
2026-01-05 00:45:21.318250 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:21.318261 | orchestrator | Monday 05 January 2026 00:45:13 +0000 (0:00:00.335) 0:00:14.886 ********
2026-01-05 00:45:21.318272 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:21.318285 | orchestrator |
2026-01-05 00:45:21.318297 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:21.318308 | orchestrator | Monday 05 January 2026 00:45:13 +0000 (0:00:00.192) 0:00:15.079 ********
2026-01-05 00:45:21.318319 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:21.318330 | orchestrator |
2026-01-05 00:45:21.318343 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:21.318356 | orchestrator | Monday 05 January 2026 00:45:13 +0000 (0:00:00.192) 0:00:15.272 ********
2026-01-05 00:45:21.318369 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:21.318382 | orchestrator |
2026-01-05 00:45:21.318422 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:21.318462 | orchestrator | Monday 05 January 2026 00:45:14 +0000 (0:00:00.204) 0:00:15.476 ********
2026-01-05 00:45:21.318476 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:21.318489 | orchestrator |
2026-01-05 00:45:21.318501 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:21.318512 | orchestrator | Monday 05 January 2026 00:45:14 +0000 (0:00:00.164) 0:00:15.641 ********
2026-01-05 00:45:21.318523 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:21.318534 | orchestrator |
2026-01-05 00:45:21.318545 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:21.318556 | orchestrator | Monday 05 January 2026 00:45:14 +0000 (0:00:00.572) 0:00:16.213 ********
2026-01-05 00:45:21.318567 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:21.318578 | orchestrator |
2026-01-05 00:45:21.318610 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:21.318622 | orchestrator | Monday 05 January 2026 00:45:15 +0000 (0:00:00.177) 0:00:16.391 ********
2026-01-05 00:45:21.318633 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:21.318644 | orchestrator |
2026-01-05 00:45:21.318654 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:21.318665 | orchestrator | Monday 05 January 2026 00:45:15 +0000 (0:00:00.185) 0:00:16.576 ********
2026-01-05 00:45:21.318676 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:21.318687 | orchestrator |
2026-01-05 00:45:21.318698 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:21.318708 | orchestrator | Monday 05 January 2026 00:45:15 +0000 (0:00:00.207) 0:00:16.784 ********
2026-01-05 00:45:21.318719 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64)
2026-01-05 00:45:21.318731 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64)
2026-01-05 00:45:21.318742 | orchestrator |
2026-01-05 00:45:21.318753 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:21.318764 | orchestrator | Monday 05 January 2026 00:45:15 +0000 (0:00:00.464) 0:00:17.248 ********
2026-01-05 00:45:21.318775 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_faa0d012-340f-4cbd-a064-876345a11d6a)
2026-01-05 00:45:21.318786 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_faa0d012-340f-4cbd-a064-876345a11d6a)
2026-01-05 00:45:21.318797 | orchestrator |
2026-01-05 00:45:21.318808 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:21.318819 | orchestrator | Monday 05 January 2026 00:45:16 +0000 (0:00:00.460) 0:00:17.709 ********
2026-01-05 00:45:21.318829 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_79f451b0-665e-4ae6-bc28-e4c9d18e1f8d)
2026-01-05 00:45:21.318840 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_79f451b0-665e-4ae6-bc28-e4c9d18e1f8d)
2026-01-05 00:45:21.318851 | orchestrator |
2026-01-05 00:45:21.318862 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:21.318893 | orchestrator | Monday 05 January 2026 00:45:16 +0000 (0:00:00.454) 0:00:18.163 ********
2026-01-05 00:45:21.318905 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_165d58d7-2860-4843-bbd3-8318e20b6051)
2026-01-05 00:45:21.318916 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_165d58d7-2860-4843-bbd3-8318e20b6051)
2026-01-05 00:45:21.318928 | orchestrator |
2026-01-05 00:45:21.318938 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:21.318949 | orchestrator | Monday 05 January 2026 00:45:17 +0000 (0:00:00.466) 0:00:18.630 ********
2026-01-05 00:45:21.318960 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-05 00:45:21.318971 | orchestrator |
2026-01-05 00:45:21.318982 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:45:21.318993 | orchestrator | Monday 05 January 2026 00:45:17 +0000 (0:00:00.349) 0:00:18.979 ********
2026-01-05 00:45:21.319011 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-01-05 00:45:21.319022 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-01-05 00:45:21.319033 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-01-05 00:45:21.319044 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-01-05 00:45:21.319055 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-01-05 00:45:21.319066 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-01-05 00:45:21.319076 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-01-05 00:45:21.319087 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-01-05 00:45:21.319098 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-01-05 00:45:21.319108 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-01-05 00:45:21.319119 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-01-05 00:45:21.319130 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-01-05 00:45:21.319141 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-01-05 00:45:21.319152 | orchestrator |
2026-01-05 00:45:21.319162 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:45:21.319173 | orchestrator | Monday 05 January 2026 00:45:18 +0000 (0:00:00.419) 0:00:19.399 ********
2026-01-05 00:45:21.319184 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:21.319195 | orchestrator |
2026-01-05 00:45:21.319205 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:45:21.319221 | orchestrator | Monday 05 January 2026 00:45:18 +0000 (0:00:00.716) 0:00:20.115 ********
2026-01-05 00:45:21.319233 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:21.319244 | orchestrator |
2026-01-05 00:45:21.319255 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:45:21.319265 | orchestrator | Monday 05 January 2026 00:45:19 +0000 (0:00:00.205) 0:00:20.320 ********
2026-01-05 00:45:21.319276 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:21.319287 | orchestrator |
2026-01-05 00:45:21.319298 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:45:21.319309 | orchestrator | Monday 05 January 2026 00:45:19 +0000 (0:00:00.188) 0:00:20.509 ********
2026-01-05 00:45:21.319320 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:21.319331 | orchestrator |
2026-01-05 00:45:21.319342 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:45:21.319353 | orchestrator | Monday 05 January 2026 00:45:19 +0000 (0:00:00.222) 0:00:20.732 ********
2026-01-05 00:45:21.319363 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:21.319374 | orchestrator |
2026-01-05 00:45:21.319403 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:45:21.319415 | orchestrator | Monday 05 January 2026 00:45:19 +0000 (0:00:00.208) 0:00:20.941 ********
2026-01-05 00:45:21.319426 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:21.319436 | orchestrator |
2026-01-05 00:45:21.319447 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:45:21.319458 | orchestrator | Monday 05 January 2026 00:45:19 +0000 (0:00:00.211) 0:00:21.152 ********
2026-01-05 00:45:21.319469 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:21.319480 | orchestrator |
2026-01-05 00:45:21.319490 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:45:21.319501 | orchestrator | Monday 05 January 2026 00:45:20 +0000 (0:00:00.205) 0:00:21.358 ********
2026-01-05 00:45:21.319519 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:21.319530 | orchestrator |
2026-01-05 00:45:21.319541 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:45:21.319552 | orchestrator | Monday 05 January 2026 00:45:20 +0000 (0:00:00.220) 0:00:21.578 ********
2026-01-05 00:45:21.319563 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-01-05 00:45:21.319576 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-01-05 00:45:21.319595 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-01-05 00:45:21.319614 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-01-05 00:45:21.319639 | orchestrator |
2026-01-05 00:45:21.319660 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:45:21.319678 | orchestrator | Monday 05 January 2026 00:45:21 +0000 (0:00:00.839) 0:00:22.417 ********
2026-01-05 00:45:21.319695 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:28.358495 | orchestrator |
2026-01-05 00:45:28.358608 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:45:28.358625 | orchestrator | Monday 05 January 2026 00:45:21 +0000 (0:00:00.170) 0:00:22.588 ********
2026-01-05 00:45:28.358636 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:28.358648 | orchestrator |
2026-01-05 00:45:28.358658 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:45:28.358668 | orchestrator | Monday 05 January 2026 00:45:21 +0000 (0:00:00.174) 0:00:22.763 ********
2026-01-05 00:45:28.358678 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:28.358687 | orchestrator |
2026-01-05 00:45:28.358697 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:45:28.358707 | orchestrator | Monday 05 January 2026 00:45:21 +0000 (0:00:00.162) 0:00:22.925 ********
2026-01-05 00:45:28.358716 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:28.358726 | orchestrator |
2026-01-05 00:45:28.358736 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-01-05 00:45:28.358745 | orchestrator | Monday 05 January 2026 00:45:22 +0000 (0:00:00.558) 0:00:23.484 ********
2026-01-05 00:45:28.358755 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-01-05 00:45:28.358765 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-01-05 00:45:28.358774 | orchestrator |
2026-01-05 00:45:28.358784 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-05 00:45:28.358794 | orchestrator | Monday 05 January 2026 00:45:22 +0000 (0:00:00.156) 0:00:23.641 ********
2026-01-05 00:45:28.358803 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:28.358814 | orchestrator |
2026-01-05 00:45:28.358824 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-05 00:45:28.358833 | orchestrator | Monday 05 January 2026 00:45:22 +0000 (0:00:00.116) 0:00:23.757 ********
2026-01-05 00:45:28.358843 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:28.358853 | orchestrator |
2026-01-05 00:45:28.358862 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-05 00:45:28.358872 | orchestrator | Monday 05 January 2026 00:45:22 +0000 (0:00:00.116) 0:00:23.874 ********
2026-01-05 00:45:28.358882 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:28.358891 | orchestrator |
2026-01-05 00:45:28.358901 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-01-05 00:45:28.358910 | orchestrator | Monday 05 January 2026 00:45:22 +0000 (0:00:00.127) 0:00:24.001 ********
2026-01-05 00:45:28.358923 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:45:28.358935 | orchestrator |
2026-01-05 00:45:28.358946 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-05 00:45:28.358957 | orchestrator | Monday 05 January 2026 00:45:22 +0000 (0:00:00.168) 0:00:24.169 ********
2026-01-05 00:45:28.358970 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bd4e3544-7c7e-58ac-a4cc-590b648d75bf'}})
2026-01-05 00:45:28.358983 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '35e03706-0bf5-5720-bc24-6001f60a2be0'}})
2026-01-05 00:45:28.359021 | orchestrator |
2026-01-05 00:45:28.359031 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-05 00:45:28.359041 | orchestrator | Monday 05 January 2026 00:45:23 +0000 (0:00:00.175) 0:00:24.344 ********
2026-01-05 00:45:28.359051 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bd4e3544-7c7e-58ac-a4cc-590b648d75bf'}})
2026-01-05 00:45:28.359082 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '35e03706-0bf5-5720-bc24-6001f60a2be0'}})
2026-01-05 00:45:28.359092 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:28.359102 | orchestrator |
2026-01-05 00:45:28.359112 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-05 00:45:28.359122 | orchestrator | Monday 05 January 2026 00:45:23 +0000 (0:00:00.150) 0:00:24.495 ********
2026-01-05 00:45:28.359132 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bd4e3544-7c7e-58ac-a4cc-590b648d75bf'}})
2026-01-05 00:45:28.359142 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '35e03706-0bf5-5720-bc24-6001f60a2be0'}})
2026-01-05 00:45:28.359151 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:28.359161 | orchestrator |
2026-01-05 00:45:28.359171 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-01-05 00:45:28.359180 | orchestrator | Monday 05 January 2026 00:45:23 +0000 (0:00:00.139) 0:00:24.635 ********
2026-01-05 00:45:28.359191 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bd4e3544-7c7e-58ac-a4cc-590b648d75bf'}})
2026-01-05 00:45:28.359201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '35e03706-0bf5-5720-bc24-6001f60a2be0'}})
2026-01-05 00:45:28.359210 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:28.359220 | orchestrator |
2026-01-05 00:45:28.359230 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-01-05 00:45:28.359239 | orchestrator | Monday 05 January 2026 00:45:23 +0000 (0:00:00.133) 0:00:24.768 ********
2026-01-05 00:45:28.359249 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:45:28.359259 | orchestrator |
2026-01-05 00:45:28.359268 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-01-05 00:45:28.359278 | orchestrator | Monday 05 January 2026 00:45:23 +0000 (0:00:00.184) 0:00:24.953 ********
2026-01-05 00:45:28.359287 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:45:28.359297 | orchestrator |
2026-01-05 00:45:28.359306 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-01-05 00:45:28.359316 | orchestrator | Monday 05 January 2026 00:45:23 +0000 (0:00:00.147) 0:00:25.100 ********
2026-01-05 00:45:28.359344 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:28.359355 | orchestrator |
2026-01-05 00:45:28.359364 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-01-05 00:45:28.359374 | orchestrator | Monday 05 January 2026 00:45:24 +0000 (0:00:00.508) 0:00:25.609 ********
2026-01-05 00:45:28.359408 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:28.359419 | orchestrator |
2026-01-05 00:45:28.359429 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-01-05 00:45:28.359438 | orchestrator | Monday 05 January 2026 00:45:24 +0000 (0:00:00.151) 0:00:25.760 ********
2026-01-05 00:45:28.359448 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:28.359458 | orchestrator |
2026-01-05 00:45:28.359467 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-05 00:45:28.359477 | orchestrator | Monday 05 January 2026 00:45:24 +0000 (0:00:00.159) 0:00:25.920 ********
2026-01-05 00:45:28.359487 | orchestrator | ok: [testbed-node-4] => {
2026-01-05 00:45:28.359496 | orchestrator |     "ceph_osd_devices": {
2026-01-05 00:45:28.359506 | orchestrator |         "sdb": {
2026-01-05 00:45:28.359517 | orchestrator |             "osd_lvm_uuid": "bd4e3544-7c7e-58ac-a4cc-590b648d75bf"
2026-01-05 00:45:28.359538 | orchestrator |         },
2026-01-05 00:45:28.359548 | orchestrator |         "sdc": {
2026-01-05 00:45:28.359557 | orchestrator |             "osd_lvm_uuid": "35e03706-0bf5-5720-bc24-6001f60a2be0"
2026-01-05 00:45:28.359567 | orchestrator |         }
2026-01-05 00:45:28.359577 | orchestrator |     }
2026-01-05 00:45:28.359587 | orchestrator | }
2026-01-05 00:45:28.359597 | orchestrator |
2026-01-05 00:45:28.359607 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-05 00:45:28.359616 | orchestrator | Monday 05 January 2026 00:45:24 +0000 (0:00:00.200) 0:00:26.120 ********
2026-01-05 00:45:28.359626 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:28.359635 | orchestrator |
2026-01-05 00:45:28.359645 | orchestrator | TASK [Print DB devices] ********************************************************
2026-01-05 00:45:28.359655 | orchestrator | Monday 05 January 2026 00:45:25 +0000 (0:00:00.164) 0:00:26.285 ********
2026-01-05 00:45:28.359664 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:28.359674 | orchestrator |
2026-01-05 00:45:28.359684 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-05 00:45:28.359693 | orchestrator | Monday 05 January 2026 00:45:25 +0000 (0:00:00.160) 0:00:26.445 ********
2026-01-05 00:45:28.359703 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:28.359712 | orchestrator |
2026-01-05 00:45:28.359722 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-05 00:45:28.359738 | orchestrator | Monday 05 January 2026 00:45:25 +0000 (0:00:00.160) 0:00:26.605 ********
2026-01-05 00:45:28.359755 | orchestrator | changed: [testbed-node-4] => {
2026-01-05 00:45:28.359770 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-05 00:45:28.359784 | orchestrator |         "ceph_osd_devices": {
2026-01-05 00:45:28.359800 | orchestrator |             "sdb": {
2026-01-05 00:45:28.359815 | orchestrator |                 "osd_lvm_uuid": "bd4e3544-7c7e-58ac-a4cc-590b648d75bf"
2026-01-05 00:45:28.359830 | orchestrator |             },
2026-01-05 00:45:28.359845 | orchestrator |             "sdc": {
2026-01-05 00:45:28.359860 | orchestrator |                 "osd_lvm_uuid": "35e03706-0bf5-5720-bc24-6001f60a2be0"
2026-01-05 00:45:28.359878 | orchestrator |             }
2026-01-05 00:45:28.359894 | orchestrator |         },
2026-01-05 00:45:28.359910 | orchestrator |         "lvm_volumes": [
2026-01-05 00:45:28.359928 | orchestrator |             {
2026-01-05 00:45:28.359938 | orchestrator |                 "data": "osd-block-bd4e3544-7c7e-58ac-a4cc-590b648d75bf",
2026-01-05 00:45:28.359948 | orchestrator |                 "data_vg": "ceph-bd4e3544-7c7e-58ac-a4cc-590b648d75bf"
2026-01-05 00:45:28.359958 | orchestrator |             },
2026-01-05 00:45:28.359967 | orchestrator |             {
2026-01-05 00:45:28.359977 | orchestrator |                 "data": "osd-block-35e03706-0bf5-5720-bc24-6001f60a2be0",
2026-01-05 00:45:28.359986 | orchestrator |                 "data_vg": "ceph-35e03706-0bf5-5720-bc24-6001f60a2be0"
2026-01-05 00:45:28.359996 | orchestrator |             }
2026-01-05 00:45:28.360005 | orchestrator |         ]
2026-01-05 00:45:28.360014 | orchestrator |     }
2026-01-05 00:45:28.360024 | orchestrator | }
2026-01-05 00:45:28.360034 | orchestrator |
2026-01-05 00:45:28.360043 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-05 00:45:28.360053 | orchestrator | Monday 05 January 2026 00:45:25 +0000 (0:00:00.254) 0:00:26.860 ********
2026-01-05 00:45:28.360062 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-01-05 00:45:28.360072 | orchestrator |
2026-01-05 00:45:28.360081 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-05 00:45:28.360091 | orchestrator |
2026-01-05 00:45:28.360100 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-05 00:45:28.360110 | orchestrator | Monday 05 January 2026 00:45:26 +0000 (0:00:01.276) 0:00:28.136 ********
2026-01-05 00:45:28.360119 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-05 00:45:28.360129 | orchestrator |
2026-01-05 00:45:28.360139 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-05 00:45:28.360164 | orchestrator | Monday 05 January 2026 00:45:27 +0000 (0:00:00.786) 0:00:28.923 ********
2026-01-05 00:45:28.360175 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:45:28.360185 | orchestrator |
2026-01-05 00:45:28.360194 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:28.360204 | orchestrator | Monday 05 January 2026 00:45:27 +0000 (0:00:00.286) 0:00:29.209 ********
2026-01-05 00:45:28.360213 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-01-05 00:45:28.360223 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-01-05 00:45:28.360233 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-01-05 00:45:28.360242 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-01-05 00:45:28.360252 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-01-05 00:45:28.360270 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-01-05 00:45:37.181470 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-01-05 00:45:37.181589 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-01-05 00:45:37.181604 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-01-05 00:45:37.181616 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-01-05 00:45:37.181627 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-01-05 00:45:37.181638 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-01-05 00:45:37.181648 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-01-05 00:45:37.181659 | orchestrator |
2026-01-05 00:45:37.181671 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:37.181683 | orchestrator | Monday 05 January 2026 00:45:28 +0000 (0:00:00.411) 0:00:29.621 ********
2026-01-05 00:45:37.181694 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:45:37.181706 | orchestrator |
2026-01-05 00:45:37.181717 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:37.181728 | orchestrator | Monday 05 January 2026 00:45:28 +0000 (0:00:00.247) 0:00:29.868 ********
2026-01-05 00:45:37.181739 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:45:37.181749 | orchestrator |
2026-01-05 00:45:37.181760 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:37.181771 | orchestrator | Monday 05 January 2026 00:45:28 +0000 (0:00:00.208) 0:00:30.077 ********
2026-01-05 00:45:37.181781 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:45:37.181792 | orchestrator |
2026-01-05 00:45:37.181803 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:37.181814 | orchestrator | Monday 05 January 2026 00:45:28 +0000 (0:00:00.193) 0:00:30.270 ********
2026-01-05 00:45:37.181825 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:45:37.181836 | orchestrator |
2026-01-05 00:45:37.181847 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:37.181857 | orchestrator | Monday 05 January 2026 00:45:29 +0000 (0:00:00.212) 0:00:30.482 ********
2026-01-05 00:45:37.181868 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:45:37.181879 | orchestrator |
2026-01-05 00:45:37.181890 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:37.181900 | orchestrator | Monday 05 January 2026 00:45:29 +0000 (0:00:00.208) 0:00:30.691 ********
2026-01-05 00:45:37.181911 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:45:37.181922 | orchestrator |
2026-01-05 00:45:37.181933 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:45:37.181972 | orchestrator | Monday 05 January 2026 00:45:29 +0000 (0:00:00.210) 0:00:30.902 ********
2026-01-05 00:45:37.181989 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:45:37.182009 | orchestrator |
2026-01-05 00:45:37.182121 | orchestrator | TASK [Add known
links to the list of available block devices] ****************** 2026-01-05 00:45:37.182144 | orchestrator | Monday 05 January 2026 00:45:29 +0000 (0:00:00.203) 0:00:31.105 ******** 2026-01-05 00:45:37.182164 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:45:37.182180 | orchestrator | 2026-01-05 00:45:37.182194 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:45:37.182207 | orchestrator | Monday 05 January 2026 00:45:30 +0000 (0:00:00.266) 0:00:31.371 ******** 2026-01-05 00:45:37.182220 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305) 2026-01-05 00:45:37.182234 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305) 2026-01-05 00:45:37.182246 | orchestrator | 2026-01-05 00:45:37.182259 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:45:37.182272 | orchestrator | Monday 05 January 2026 00:45:30 +0000 (0:00:00.872) 0:00:32.244 ******** 2026-01-05 00:45:37.182286 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_23055056-069f-450b-aeeb-5eb50c3216da) 2026-01-05 00:45:37.182298 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_23055056-069f-450b-aeeb-5eb50c3216da) 2026-01-05 00:45:37.182309 | orchestrator | 2026-01-05 00:45:37.182320 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:45:37.182330 | orchestrator | Monday 05 January 2026 00:45:31 +0000 (0:00:00.527) 0:00:32.772 ******** 2026-01-05 00:45:37.182341 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_bd2b6514-9bcf-45c0-8865-be606d512acf) 2026-01-05 00:45:37.182352 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_bd2b6514-9bcf-45c0-8865-be606d512acf) 2026-01-05 00:45:37.182363 | orchestrator | 2026-01-05 
00:45:37.182398 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:45:37.182418 | orchestrator | Monday 05 January 2026 00:45:31 +0000 (0:00:00.413) 0:00:33.185 ******** 2026-01-05 00:45:37.182436 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a447ecf7-81d3-4a74-8944-683d4141cf1b) 2026-01-05 00:45:37.182456 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a447ecf7-81d3-4a74-8944-683d4141cf1b) 2026-01-05 00:45:37.182474 | orchestrator | 2026-01-05 00:45:37.182486 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:45:37.182497 | orchestrator | Monday 05 January 2026 00:45:32 +0000 (0:00:00.453) 0:00:33.639 ******** 2026-01-05 00:45:37.182507 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-05 00:45:37.182518 | orchestrator | 2026-01-05 00:45:37.182529 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:45:37.182560 | orchestrator | Monday 05 January 2026 00:45:32 +0000 (0:00:00.309) 0:00:33.948 ******** 2026-01-05 00:45:37.182571 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-01-05 00:45:37.182582 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-01-05 00:45:37.182593 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-01-05 00:45:37.182603 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-01-05 00:45:37.182614 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-01-05 00:45:37.182645 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-01-05 00:45:37.182657 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-01-05 00:45:37.182668 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-01-05 00:45:37.182689 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-01-05 00:45:37.182700 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-01-05 00:45:37.182711 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-01-05 00:45:37.182721 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-01-05 00:45:37.182732 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-01-05 00:45:37.182743 | orchestrator | 2026-01-05 00:45:37.182753 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:45:37.182764 | orchestrator | Monday 05 January 2026 00:45:33 +0000 (0:00:00.376) 0:00:34.325 ******** 2026-01-05 00:45:37.182775 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:45:37.182786 | orchestrator | 2026-01-05 00:45:37.182797 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:45:37.182807 | orchestrator | Monday 05 January 2026 00:45:33 +0000 (0:00:00.744) 0:00:35.069 ******** 2026-01-05 00:45:37.182818 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:45:37.182829 | orchestrator | 2026-01-05 00:45:37.182839 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:45:37.182855 | orchestrator | Monday 05 January 2026 00:45:34 +0000 (0:00:00.309) 0:00:35.379 ******** 2026-01-05 00:45:37.182866 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:45:37.182877 | orchestrator | 
2026-01-05 00:45:37.182888 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:45:37.182899 | orchestrator | Monday 05 January 2026 00:45:34 +0000 (0:00:00.238) 0:00:35.618 ******** 2026-01-05 00:45:37.182912 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:45:37.182931 | orchestrator | 2026-01-05 00:45:37.182961 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:45:37.182979 | orchestrator | Monday 05 January 2026 00:45:34 +0000 (0:00:00.194) 0:00:35.812 ******** 2026-01-05 00:45:37.182996 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:45:37.183013 | orchestrator | 2026-01-05 00:45:37.183031 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:45:37.183048 | orchestrator | Monday 05 January 2026 00:45:34 +0000 (0:00:00.205) 0:00:36.018 ******** 2026-01-05 00:45:37.183065 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:45:37.183082 | orchestrator | 2026-01-05 00:45:37.183099 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:45:37.183116 | orchestrator | Monday 05 January 2026 00:45:35 +0000 (0:00:00.543) 0:00:36.562 ******** 2026-01-05 00:45:37.183135 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:45:37.183152 | orchestrator | 2026-01-05 00:45:37.183172 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:45:37.183190 | orchestrator | Monday 05 January 2026 00:45:35 +0000 (0:00:00.208) 0:00:36.770 ******** 2026-01-05 00:45:37.183208 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:45:37.183224 | orchestrator | 2026-01-05 00:45:37.183235 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:45:37.183246 | orchestrator | Monday 05 January 2026 00:45:35 +0000 
(0:00:00.202) 0:00:36.972 ******** 2026-01-05 00:45:37.183257 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-01-05 00:45:37.183268 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-01-05 00:45:37.183279 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-01-05 00:45:37.183290 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-01-05 00:45:37.183300 | orchestrator | 2026-01-05 00:45:37.183311 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:45:37.183322 | orchestrator | Monday 05 January 2026 00:45:36 +0000 (0:00:00.648) 0:00:37.620 ******** 2026-01-05 00:45:37.183332 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:45:37.183354 | orchestrator | 2026-01-05 00:45:37.183365 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:45:37.183435 | orchestrator | Monday 05 January 2026 00:45:36 +0000 (0:00:00.188) 0:00:37.809 ******** 2026-01-05 00:45:37.183448 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:45:37.183459 | orchestrator | 2026-01-05 00:45:37.183470 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:45:37.183481 | orchestrator | Monday 05 January 2026 00:45:36 +0000 (0:00:00.173) 0:00:37.982 ******** 2026-01-05 00:45:37.183491 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:45:37.183502 | orchestrator | 2026-01-05 00:45:37.183513 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:45:37.183524 | orchestrator | Monday 05 January 2026 00:45:36 +0000 (0:00:00.171) 0:00:38.154 ******** 2026-01-05 00:45:37.183535 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:45:37.183546 | orchestrator | 2026-01-05 00:45:37.183567 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-01-05 00:45:40.857453 | orchestrator | 
Monday 05 January 2026 00:45:37 +0000 (0:00:00.209) 0:00:38.363 ******** 2026-01-05 00:45:40.857599 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-01-05 00:45:40.857618 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-01-05 00:45:40.857629 | orchestrator | 2026-01-05 00:45:40.857640 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-01-05 00:45:40.857651 | orchestrator | Monday 05 January 2026 00:45:37 +0000 (0:00:00.136) 0:00:38.499 ******** 2026-01-05 00:45:40.857661 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:45:40.857671 | orchestrator | 2026-01-05 00:45:40.857681 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-01-05 00:45:40.857690 | orchestrator | Monday 05 January 2026 00:45:37 +0000 (0:00:00.094) 0:00:38.594 ******** 2026-01-05 00:45:40.857700 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:45:40.857709 | orchestrator | 2026-01-05 00:45:40.857719 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-01-05 00:45:40.857729 | orchestrator | Monday 05 January 2026 00:45:37 +0000 (0:00:00.095) 0:00:38.689 ******** 2026-01-05 00:45:40.857738 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:45:40.857748 | orchestrator | 2026-01-05 00:45:40.857757 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-01-05 00:45:40.857767 | orchestrator | Monday 05 January 2026 00:45:37 +0000 (0:00:00.241) 0:00:38.931 ******** 2026-01-05 00:45:40.857777 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:45:40.857787 | orchestrator | 2026-01-05 00:45:40.857815 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-01-05 00:45:40.857826 | orchestrator | Monday 05 January 2026 00:45:37 +0000 (0:00:00.098) 0:00:39.029 ******** 
2026-01-05 00:45:40.857836 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f2726894-ebb3-5d48-8b2e-e077f444c4ac'}}) 2026-01-05 00:45:40.857847 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'edc09b40-6ec9-59c0-95b4-baacc31b5a92'}}) 2026-01-05 00:45:40.857856 | orchestrator | 2026-01-05 00:45:40.857866 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-01-05 00:45:40.857875 | orchestrator | Monday 05 January 2026 00:45:37 +0000 (0:00:00.113) 0:00:39.143 ******** 2026-01-05 00:45:40.857886 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f2726894-ebb3-5d48-8b2e-e077f444c4ac'}})  2026-01-05 00:45:40.857898 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'edc09b40-6ec9-59c0-95b4-baacc31b5a92'}})  2026-01-05 00:45:40.857910 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:45:40.857922 | orchestrator | 2026-01-05 00:45:40.857933 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-01-05 00:45:40.857944 | orchestrator | Monday 05 January 2026 00:45:38 +0000 (0:00:00.133) 0:00:39.277 ******** 2026-01-05 00:45:40.857983 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f2726894-ebb3-5d48-8b2e-e077f444c4ac'}})  2026-01-05 00:45:40.857995 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'edc09b40-6ec9-59c0-95b4-baacc31b5a92'}})  2026-01-05 00:45:40.858006 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:45:40.858093 | orchestrator | 2026-01-05 00:45:40.858105 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-05 00:45:40.858116 | orchestrator | Monday 05 January 2026 00:45:38 +0000 (0:00:00.155) 0:00:39.432 ******** 2026-01-05 00:45:40.858147 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f2726894-ebb3-5d48-8b2e-e077f444c4ac'}})  2026-01-05 00:45:40.858158 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'edc09b40-6ec9-59c0-95b4-baacc31b5a92'}})  2026-01-05 00:45:40.858167 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:45:40.858177 | orchestrator | 2026-01-05 00:45:40.858187 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-05 00:45:40.858196 | orchestrator | Monday 05 January 2026 00:45:38 +0000 (0:00:00.160) 0:00:39.592 ******** 2026-01-05 00:45:40.858206 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:45:40.858216 | orchestrator | 2026-01-05 00:45:40.858225 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-05 00:45:40.858235 | orchestrator | Monday 05 January 2026 00:45:38 +0000 (0:00:00.182) 0:00:39.775 ******** 2026-01-05 00:45:40.858244 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:45:40.858254 | orchestrator | 2026-01-05 00:45:40.858263 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-05 00:45:40.858273 | orchestrator | Monday 05 January 2026 00:45:38 +0000 (0:00:00.141) 0:00:39.917 ******** 2026-01-05 00:45:40.858282 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:45:40.858292 | orchestrator | 2026-01-05 00:45:40.858301 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-05 00:45:40.858311 | orchestrator | Monday 05 January 2026 00:45:38 +0000 (0:00:00.111) 0:00:40.028 ******** 2026-01-05 00:45:40.858320 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:45:40.858330 | orchestrator | 2026-01-05 00:45:40.858362 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-01-05 00:45:40.858396 | orchestrator | 
Monday 05 January 2026 00:45:38 +0000 (0:00:00.138) 0:00:40.166 ******** 2026-01-05 00:45:40.858407 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:45:40.858417 | orchestrator | 2026-01-05 00:45:40.858427 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-01-05 00:45:40.858436 | orchestrator | Monday 05 January 2026 00:45:39 +0000 (0:00:00.137) 0:00:40.304 ******** 2026-01-05 00:45:40.858446 | orchestrator | ok: [testbed-node-5] => { 2026-01-05 00:45:40.858455 | orchestrator |  "ceph_osd_devices": { 2026-01-05 00:45:40.858465 | orchestrator |  "sdb": { 2026-01-05 00:45:40.858497 | orchestrator |  "osd_lvm_uuid": "f2726894-ebb3-5d48-8b2e-e077f444c4ac" 2026-01-05 00:45:40.858508 | orchestrator |  }, 2026-01-05 00:45:40.858518 | orchestrator |  "sdc": { 2026-01-05 00:45:40.858528 | orchestrator |  "osd_lvm_uuid": "edc09b40-6ec9-59c0-95b4-baacc31b5a92" 2026-01-05 00:45:40.858538 | orchestrator |  } 2026-01-05 00:45:40.858548 | orchestrator |  } 2026-01-05 00:45:40.858562 | orchestrator | } 2026-01-05 00:45:40.858579 | orchestrator | 2026-01-05 00:45:40.858596 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-01-05 00:45:40.858612 | orchestrator | Monday 05 January 2026 00:45:39 +0000 (0:00:00.133) 0:00:40.438 ******** 2026-01-05 00:45:40.858628 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:45:40.858643 | orchestrator | 2026-01-05 00:45:40.858658 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-01-05 00:45:40.858674 | orchestrator | Monday 05 January 2026 00:45:39 +0000 (0:00:00.340) 0:00:40.778 ******** 2026-01-05 00:45:40.858708 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:45:40.858726 | orchestrator | 2026-01-05 00:45:40.858741 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-01-05 00:45:40.858759 | orchestrator | Monday 05 
January 2026 00:45:39 +0000 (0:00:00.122) 0:00:40.900 ******** 2026-01-05 00:45:40.858776 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:45:40.858793 | orchestrator | 2026-01-05 00:45:40.858809 | orchestrator | TASK [Print configuration data] ************************************************ 2026-01-05 00:45:40.858825 | orchestrator | Monday 05 January 2026 00:45:39 +0000 (0:00:00.127) 0:00:41.028 ******** 2026-01-05 00:45:40.858842 | orchestrator | changed: [testbed-node-5] => { 2026-01-05 00:45:40.858858 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-01-05 00:45:40.858875 | orchestrator |  "ceph_osd_devices": { 2026-01-05 00:45:40.858892 | orchestrator |  "sdb": { 2026-01-05 00:45:40.858909 | orchestrator |  "osd_lvm_uuid": "f2726894-ebb3-5d48-8b2e-e077f444c4ac" 2026-01-05 00:45:40.858925 | orchestrator |  }, 2026-01-05 00:45:40.858943 | orchestrator |  "sdc": { 2026-01-05 00:45:40.858956 | orchestrator |  "osd_lvm_uuid": "edc09b40-6ec9-59c0-95b4-baacc31b5a92" 2026-01-05 00:45:40.858966 | orchestrator |  } 2026-01-05 00:45:40.858976 | orchestrator |  }, 2026-01-05 00:45:40.858985 | orchestrator |  "lvm_volumes": [ 2026-01-05 00:45:40.858995 | orchestrator |  { 2026-01-05 00:45:40.859005 | orchestrator |  "data": "osd-block-f2726894-ebb3-5d48-8b2e-e077f444c4ac", 2026-01-05 00:45:40.859014 | orchestrator |  "data_vg": "ceph-f2726894-ebb3-5d48-8b2e-e077f444c4ac" 2026-01-05 00:45:40.859024 | orchestrator |  }, 2026-01-05 00:45:40.859034 | orchestrator |  { 2026-01-05 00:45:40.859043 | orchestrator |  "data": "osd-block-edc09b40-6ec9-59c0-95b4-baacc31b5a92", 2026-01-05 00:45:40.859053 | orchestrator |  "data_vg": "ceph-edc09b40-6ec9-59c0-95b4-baacc31b5a92" 2026-01-05 00:45:40.859063 | orchestrator |  } 2026-01-05 00:45:40.859076 | orchestrator |  ] 2026-01-05 00:45:40.859086 | orchestrator |  } 2026-01-05 00:45:40.859096 | orchestrator | } 2026-01-05 00:45:40.859105 | orchestrator | 2026-01-05 00:45:40.859115 | orchestrator | RUNNING HANDLER 
[Write configuration file] ************************************* 2026-01-05 00:45:40.859124 | orchestrator | Monday 05 January 2026 00:45:39 +0000 (0:00:00.202) 0:00:41.231 ******** 2026-01-05 00:45:40.859134 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-01-05 00:45:40.859143 | orchestrator | 2026-01-05 00:45:40.859153 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:45:40.859163 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-05 00:45:40.859174 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-05 00:45:40.859184 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-05 00:45:40.859193 | orchestrator | 2026-01-05 00:45:40.859203 | orchestrator | 2026-01-05 00:45:40.859212 | orchestrator | 2026-01-05 00:45:40.859221 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:45:40.859231 | orchestrator | Monday 05 January 2026 00:45:40 +0000 (0:00:00.880) 0:00:42.111 ******** 2026-01-05 00:45:40.859240 | orchestrator | =============================================================================== 2026-01-05 00:45:40.859250 | orchestrator | Write configuration file ------------------------------------------------ 3.92s 2026-01-05 00:45:40.859259 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.25s 2026-01-05 00:45:40.859269 | orchestrator | Add known partitions to the list of available block devices ------------- 1.21s 2026-01-05 00:45:40.859278 | orchestrator | Add known links to the list of available block devices ------------------ 1.19s 2026-01-05 00:45:40.859296 | orchestrator | Add known partitions to the list of available block devices ------------- 0.89s 2026-01-05 
00:45:40.859306 | orchestrator | Add known links to the list of available block devices ------------------ 0.87s 2026-01-05 00:45:40.859315 | orchestrator | Add known partitions to the list of available block devices ------------- 0.84s 2026-01-05 00:45:40.859325 | orchestrator | Print configuration data ------------------------------------------------ 0.79s 2026-01-05 00:45:40.859335 | orchestrator | Get initial list of available block devices ----------------------------- 0.78s 2026-01-05 00:45:40.859344 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s 2026-01-05 00:45:40.859353 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s 2026-01-05 00:45:40.859363 | orchestrator | Set DB devices config data ---------------------------------------------- 0.74s 2026-01-05 00:45:40.859402 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s 2026-01-05 00:45:40.859423 | orchestrator | Print WAL devices ------------------------------------------------------- 0.67s 2026-01-05 00:45:41.223413 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s 2026-01-05 00:45:41.223560 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.60s 2026-01-05 00:45:41.223587 | orchestrator | Add known links to the list of available block devices ------------------ 0.58s 2026-01-05 00:45:41.223608 | orchestrator | Add known links to the list of available block devices ------------------ 0.58s 2026-01-05 00:45:41.223627 | orchestrator | Add known links to the list of available block devices ------------------ 0.57s 2026-01-05 00:45:41.223639 | orchestrator | Add known partitions to the list of available block devices ------------- 0.56s 2026-01-05 00:46:03.862627 | orchestrator | 2026-01-05 00:46:03 | INFO  | Task efba31e5-2b54-4b7d-a492-3b7b9f00029a (sync inventory) is running in 
background. Output coming soon. 2026-01-05 00:46:32.251071 | orchestrator | 2026-01-05 00:46:05 | INFO  | Starting group_vars file reorganization 2026-01-05 00:46:32.251181 | orchestrator | 2026-01-05 00:46:05 | INFO  | Moved 0 file(s) to their respective directories 2026-01-05 00:46:32.251192 | orchestrator | 2026-01-05 00:46:05 | INFO  | Group_vars file reorganization completed 2026-01-05 00:46:32.251199 | orchestrator | 2026-01-05 00:46:08 | INFO  | Starting variable preparation from inventory 2026-01-05 00:46:32.251205 | orchestrator | 2026-01-05 00:46:11 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-01-05 00:46:32.251212 | orchestrator | 2026-01-05 00:46:11 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-01-05 00:46:32.251244 | orchestrator | 2026-01-05 00:46:11 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-01-05 00:46:32.251252 | orchestrator | 2026-01-05 00:46:11 | INFO  | 3 file(s) written, 6 host(s) processed 2026-01-05 00:46:32.251259 | orchestrator | 2026-01-05 00:46:11 | INFO  | Variable preparation completed 2026-01-05 00:46:32.251266 | orchestrator | 2026-01-05 00:46:12 | INFO  | Starting inventory overwrite handling 2026-01-05 00:46:32.251276 | orchestrator | 2026-01-05 00:46:12 | INFO  | Handling group overwrites in 99-overwrite 2026-01-05 00:46:32.251283 | orchestrator | 2026-01-05 00:46:12 | INFO  | Removing group frr:children from 60-generic 2026-01-05 00:46:32.251289 | orchestrator | 2026-01-05 00:46:12 | INFO  | Removing group netbird:children from 50-infrastructure 2026-01-05 00:46:32.251295 | orchestrator | 2026-01-05 00:46:12 | INFO  | Removing group ceph-mds from 50-ceph 2026-01-05 00:46:32.251302 | orchestrator | 2026-01-05 00:46:12 | INFO  | Removing group ceph-rgw from 50-ceph 2026-01-05 00:46:32.251308 | orchestrator | 2026-01-05 00:46:12 | INFO  | Handling group overwrites in 20-roles 2026-01-05 00:46:32.251383 | orchestrator | 2026-01-05 00:46:12 | 
INFO  | Removing group k3s_node from 50-infrastructure 2026-01-05 00:46:32.251391 | orchestrator | 2026-01-05 00:46:12 | INFO  | Removed 5 group(s) in total 2026-01-05 00:46:32.251397 | orchestrator | 2026-01-05 00:46:12 | INFO  | Inventory overwrite handling completed 2026-01-05 00:46:32.251403 | orchestrator | 2026-01-05 00:46:13 | INFO  | Starting merge of inventory files 2026-01-05 00:46:32.251409 | orchestrator | 2026-01-05 00:46:13 | INFO  | Inventory files merged successfully 2026-01-05 00:46:32.251415 | orchestrator | 2026-01-05 00:46:19 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-01-05 00:46:32.251420 | orchestrator | 2026-01-05 00:46:30 | INFO  | Successfully wrote ClusterShell configuration 2026-01-05 00:46:32.251426 | orchestrator | [master 1559b3f] 2026-01-05-00-46 2026-01-05 00:46:32.251433 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2026-01-05 00:46:34.188635 | orchestrator | 2026-01-05 00:46:34 | INFO  | Task bf533d50-c8b9-480d-be52-0df56e2c0be1 (ceph-create-lvm-devices) was prepared for execution. 2026-01-05 00:46:34.188692 | orchestrator | 2026-01-05 00:46:34 | INFO  | It takes a moment until task bf533d50-c8b9-480d-be52-0df56e2c0be1 (ceph-create-lvm-devices) has been started and output is visible here. 
2026-01-05 00:46:46.259064 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-05 00:46:46.259154 | orchestrator | 2.16.14
2026-01-05 00:46:46.259164 | orchestrator |
2026-01-05 00:46:46.259171 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-01-05 00:46:46.259178 | orchestrator |
2026-01-05 00:46:46.259184 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-05 00:46:46.259191 | orchestrator | Monday 05 January 2026 00:46:38 +0000 (0:00:00.297) 0:00:00.297 ********
2026-01-05 00:46:46.259197 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-05 00:46:46.259203 | orchestrator |
2026-01-05 00:46:46.259208 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-05 00:46:46.259214 | orchestrator | Monday 05 January 2026 00:46:38 +0000 (0:00:00.290) 0:00:00.587 ********
2026-01-05 00:46:46.259219 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:46:46.259225 | orchestrator |
2026-01-05 00:46:46.259232 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:46:46.259237 | orchestrator | Monday 05 January 2026 00:46:38 +0000 (0:00:00.249) 0:00:00.836 ********
2026-01-05 00:46:46.259242 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-01-05 00:46:46.259248 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-01-05 00:46:46.259253 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-01-05 00:46:46.259258 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-01-05 00:46:46.259263 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-01-05 00:46:46.259268 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-01-05 00:46:46.259273 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-01-05 00:46:46.259278 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-01-05 00:46:46.259284 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-01-05 00:46:46.259289 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-01-05 00:46:46.259294 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-01-05 00:46:46.259299 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-01-05 00:46:46.259344 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-01-05 00:46:46.259351 | orchestrator |
2026-01-05 00:46:46.259356 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:46:46.259361 | orchestrator | Monday 05 January 2026 00:46:39 +0000 (0:00:00.560) 0:00:01.396 ********
2026-01-05 00:46:46.259366 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:46:46.259372 | orchestrator |
2026-01-05 00:46:46.259377 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:46:46.259382 | orchestrator | Monday 05 January 2026 00:46:39 +0000 (0:00:00.202) 0:00:01.598 ********
2026-01-05 00:46:46.259387 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:46:46.259392 | orchestrator |
2026-01-05 00:46:46.259397 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:46:46.259403 | orchestrator | Monday 05 January 2026 00:46:39 +0000 (0:00:00.264) 0:00:01.863 ********
2026-01-05 00:46:46.259408 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:46:46.259413 | orchestrator |
2026-01-05 00:46:46.259418 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:46:46.259423 | orchestrator | Monday 05 January 2026 00:46:40 +0000 (0:00:00.241) 0:00:02.104 ********
2026-01-05 00:46:46.259428 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:46:46.259433 | orchestrator |
2026-01-05 00:46:46.259439 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:46:46.259444 | orchestrator | Monday 05 January 2026 00:46:40 +0000 (0:00:00.279) 0:00:02.384 ********
2026-01-05 00:46:46.259449 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:46:46.259454 | orchestrator |
2026-01-05 00:46:46.259459 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:46:46.259464 | orchestrator | Monday 05 January 2026 00:46:40 +0000 (0:00:00.293) 0:00:02.677 ********
2026-01-05 00:46:46.259469 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:46:46.259474 | orchestrator |
2026-01-05 00:46:46.259479 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:46:46.259484 | orchestrator | Monday 05 January 2026 00:46:40 +0000 (0:00:00.259) 0:00:02.936 ********
2026-01-05 00:46:46.259489 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:46:46.259495 | orchestrator |
2026-01-05 00:46:46.259500 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:46:46.259505 | orchestrator | Monday 05 January 2026 00:46:41 +0000 (0:00:00.228) 0:00:03.165 ********
2026-01-05 00:46:46.259510 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:46:46.259515 | orchestrator |
2026-01-05 00:46:46.259520 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:46:46.259525 | orchestrator | Monday 05 January 2026 00:46:41 +0000 (0:00:00.237) 0:00:03.402 ********
2026-01-05 00:46:46.259530 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f)
2026-01-05 00:46:46.259537 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f)
2026-01-05 00:46:46.259542 | orchestrator |
2026-01-05 00:46:46.259547 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:46:46.259565 | orchestrator | Monday 05 January 2026 00:46:42 +0000 (0:00:00.555) 0:00:03.958 ********
2026-01-05 00:46:46.259571 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_40600621-aef8-490d-8855-2a618a83589e)
2026-01-05 00:46:46.259576 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_40600621-aef8-490d-8855-2a618a83589e)
2026-01-05 00:46:46.259581 | orchestrator |
2026-01-05 00:46:46.259586 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:46:46.259591 | orchestrator | Monday 05 January 2026 00:46:42 +0000 (0:00:00.674) 0:00:04.633 ********
2026-01-05 00:46:46.259596 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_423e4112-2158-480f-994d-106730fe425c)
2026-01-05 00:46:46.259606 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_423e4112-2158-480f-994d-106730fe425c)
2026-01-05 00:46:46.259613 | orchestrator |
2026-01-05 00:46:46.259619 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:46:46.259625 | orchestrator | Monday 05 January 2026 00:46:43 +0000 (0:00:00.601) 0:00:05.234 ********
2026-01-05 00:46:46.259631 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_177f10be-5bcc-4fc5-a906-9c9dfc4c0725)
2026-01-05 00:46:46.259636 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_177f10be-5bcc-4fc5-a906-9c9dfc4c0725)
2026-01-05 00:46:46.259642 | orchestrator |
2026-01-05 00:46:46.259648 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:46:46.259654 | orchestrator | Monday 05 January 2026 00:46:44 +0000 (0:00:00.720) 0:00:05.955 ********
2026-01-05 00:46:46.259660 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-05 00:46:46.259708 | orchestrator |
2026-01-05 00:46:46.259715 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:46:46.259722 | orchestrator | Monday 05 January 2026 00:46:44 +0000 (0:00:00.336) 0:00:06.291 ********
2026-01-05 00:46:46.259728 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-01-05 00:46:46.259734 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-01-05 00:46:46.259740 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-01-05 00:46:46.259761 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-01-05 00:46:46.259767 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-01-05 00:46:46.259773 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-01-05 00:46:46.259779 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-01-05 00:46:46.259785 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-01-05 00:46:46.259791 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-01-05 00:46:46.259797 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-01-05 00:46:46.259803 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-01-05 00:46:46.259813 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-01-05 00:46:46.259819 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-01-05 00:46:46.259825 | orchestrator |
2026-01-05 00:46:46.259831 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:46:46.259837 | orchestrator | Monday 05 January 2026 00:46:44 +0000 (0:00:00.453) 0:00:06.744 ********
2026-01-05 00:46:46.259843 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:46:46.259848 | orchestrator |
2026-01-05 00:46:46.259854 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:46:46.259861 | orchestrator | Monday 05 January 2026 00:46:45 +0000 (0:00:00.203) 0:00:06.948 ********
2026-01-05 00:46:46.259867 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:46:46.259873 | orchestrator |
2026-01-05 00:46:46.259879 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:46:46.259884 | orchestrator | Monday 05 January 2026 00:46:45 +0000 (0:00:00.227) 0:00:07.175 ********
2026-01-05 00:46:46.259890 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:46:46.259896 | orchestrator |
2026-01-05 00:46:46.259902 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:46:46.259908 | orchestrator | Monday 05 January 2026 00:46:45 +0000 (0:00:00.199) 0:00:07.375 ********
2026-01-05 00:46:46.259913 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:46:46.259923 | orchestrator |
2026-01-05 00:46:46.259929 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:46:46.259935 | orchestrator | Monday 05 January 2026 00:46:45 +0000 (0:00:00.219) 0:00:07.595 ********
2026-01-05 00:46:46.259942 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:46:46.259948 | orchestrator |
2026-01-05 00:46:46.259954 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:46:46.259959 | orchestrator | Monday 05 January 2026 00:46:45 +0000 (0:00:00.206) 0:00:07.801 ********
2026-01-05 00:46:46.259966 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:46:46.259971 | orchestrator |
2026-01-05 00:46:46.259977 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:46:46.259982 | orchestrator | Monday 05 January 2026 00:46:46 +0000 (0:00:00.185) 0:00:07.986 ********
2026-01-05 00:46:46.259987 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:46:46.259992 | orchestrator |
2026-01-05 00:46:46.260002 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:46:54.375049 | orchestrator | Monday 05 January 2026 00:46:46 +0000 (0:00:00.199) 0:00:08.185 ********
2026-01-05 00:46:54.375195 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:46:54.375212 | orchestrator |
2026-01-05 00:46:54.375224 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:46:54.375235 | orchestrator | Monday 05 January 2026 00:46:46 +0000 (0:00:00.223) 0:00:08.409 ********
2026-01-05 00:46:54.375246 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-01-05 00:46:54.375256 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-01-05 00:46:54.376085 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-01-05 00:46:54.376176 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-01-05 00:46:54.376189 | orchestrator |
2026-01-05 00:46:54.376199 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:46:54.376210 | orchestrator | Monday 05 January 2026 00:46:47 +0000 (0:00:00.923) 0:00:09.332 ********
2026-01-05 00:46:54.376220 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:46:54.376229 | orchestrator |
2026-01-05 00:46:54.376239 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:46:54.376249 | orchestrator | Monday 05 January 2026 00:46:47 +0000 (0:00:00.207) 0:00:09.540 ********
2026-01-05 00:46:54.376259 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:46:54.376268 | orchestrator |
2026-01-05 00:46:54.376278 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:46:54.376288 | orchestrator | Monday 05 January 2026 00:46:47 +0000 (0:00:00.212) 0:00:09.753 ********
2026-01-05 00:46:54.376298 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:46:54.376351 | orchestrator |
2026-01-05 00:46:54.376370 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:46:54.376387 | orchestrator | Monday 05 January 2026 00:46:48 +0000 (0:00:00.195) 0:00:09.948 ********
2026-01-05 00:46:54.376402 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:46:54.376420 | orchestrator |
2026-01-05 00:46:54.376436 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-01-05 00:46:54.376451 | orchestrator | Monday 05 January 2026 00:46:48 +0000 (0:00:00.211) 0:00:10.159 ********
2026-01-05 00:46:54.376467 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:46:54.376481 | orchestrator |
2026-01-05 00:46:54.376498 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-01-05 00:46:54.376515 | orchestrator | Monday 05 January 2026 00:46:48 +0000 (0:00:00.134) 0:00:10.294 ********
2026-01-05 00:46:54.376533 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5dd43ce6-96bd-500c-b036-3c9652e3f870'}})
2026-01-05 00:46:54.376550 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6f45f623-6f4a-59be-980f-23e900ac5d1d'}})
2026-01-05 00:46:54.376567 | orchestrator |
2026-01-05 00:46:54.376583 | orchestrator | TASK [Create block VGs] ********************************************************
2026-01-05 00:46:54.376635 | orchestrator | Monday 05 January 2026 00:46:48 +0000 (0:00:00.180) 0:00:10.474 ********
2026-01-05 00:46:54.376652 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5dd43ce6-96bd-500c-b036-3c9652e3f870', 'data_vg': 'ceph-5dd43ce6-96bd-500c-b036-3c9652e3f870'})
2026-01-05 00:46:54.376672 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6f45f623-6f4a-59be-980f-23e900ac5d1d', 'data_vg': 'ceph-6f45f623-6f4a-59be-980f-23e900ac5d1d'})
2026-01-05 00:46:54.376688 | orchestrator |
2026-01-05 00:46:54.376705 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-01-05 00:46:54.376721 | orchestrator | Monday 05 January 2026 00:46:50 +0000 (0:00:02.015) 0:00:12.490 ********
2026-01-05 00:46:54.376737 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dd43ce6-96bd-500c-b036-3c9652e3f870', 'data_vg': 'ceph-5dd43ce6-96bd-500c-b036-3c9652e3f870'})
2026-01-05 00:46:54.376755 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6f45f623-6f4a-59be-980f-23e900ac5d1d', 'data_vg': 'ceph-6f45f623-6f4a-59be-980f-23e900ac5d1d'})
2026-01-05 00:46:54.376772 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:46:54.376788 | orchestrator |
2026-01-05 00:46:54.376805 | orchestrator | TASK [Create block LVs] ********************************************************
2026-01-05 00:46:54.376821 | orchestrator | Monday 05 January 2026 00:46:50 +0000 (0:00:00.179) 0:00:12.669 ********
2026-01-05 00:46:54.376838 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5dd43ce6-96bd-500c-b036-3c9652e3f870', 'data_vg': 'ceph-5dd43ce6-96bd-500c-b036-3c9652e3f870'})
2026-01-05 00:46:54.376849 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6f45f623-6f4a-59be-980f-23e900ac5d1d', 'data_vg': 'ceph-6f45f623-6f4a-59be-980f-23e900ac5d1d'})
2026-01-05 00:46:54.376859 | orchestrator |
2026-01-05 00:46:54.376869 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-01-05 00:46:54.376879 | orchestrator | Monday 05 January 2026 00:46:52 +0000 (0:00:01.608) 0:00:14.278 ********
2026-01-05 00:46:54.376888 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dd43ce6-96bd-500c-b036-3c9652e3f870', 'data_vg': 'ceph-5dd43ce6-96bd-500c-b036-3c9652e3f870'})
2026-01-05 00:46:54.376898 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6f45f623-6f4a-59be-980f-23e900ac5d1d', 'data_vg': 'ceph-6f45f623-6f4a-59be-980f-23e900ac5d1d'})
2026-01-05 00:46:54.376970 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:46:54.376988 | orchestrator |
2026-01-05 00:46:54.377003 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-01-05 00:46:54.377019 | orchestrator | Monday 05 January 2026 00:46:52 +0000 (0:00:00.162) 0:00:14.440 ********
2026-01-05 00:46:54.377061 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:46:54.377078 | orchestrator |
2026-01-05 00:46:54.377092 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-01-05 00:46:54.377106 | orchestrator | Monday 05 January 2026 00:46:52 +0000 (0:00:00.122) 0:00:14.563 ********
2026-01-05 00:46:54.377121 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dd43ce6-96bd-500c-b036-3c9652e3f870', 'data_vg': 'ceph-5dd43ce6-96bd-500c-b036-3c9652e3f870'})
2026-01-05 00:46:54.377136 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6f45f623-6f4a-59be-980f-23e900ac5d1d', 'data_vg': 'ceph-6f45f623-6f4a-59be-980f-23e900ac5d1d'})
2026-01-05 00:46:54.377152 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:46:54.377170 | orchestrator |
2026-01-05 00:46:54.377187 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-01-05 00:46:54.377203 | orchestrator | Monday 05 January 2026 00:46:53 +0000 (0:00:00.424) 0:00:14.987 ********
2026-01-05 00:46:54.377220 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:46:54.377236 | orchestrator |
2026-01-05 00:46:54.377246 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-01-05 00:46:54.377256 | orchestrator | Monday 05 January 2026 00:46:53 +0000 (0:00:00.151) 0:00:15.138 ********
2026-01-05 00:46:54.377278 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dd43ce6-96bd-500c-b036-3c9652e3f870', 'data_vg': 'ceph-5dd43ce6-96bd-500c-b036-3c9652e3f870'})
2026-01-05 00:46:54.377288 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6f45f623-6f4a-59be-980f-23e900ac5d1d', 'data_vg': 'ceph-6f45f623-6f4a-59be-980f-23e900ac5d1d'})
2026-01-05 00:46:54.377298 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:46:54.377342 | orchestrator |
2026-01-05 00:46:54.377361 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-01-05 00:46:54.377379 | orchestrator | Monday 05 January 2026 00:46:53 +0000 (0:00:00.139) 0:00:15.278 ********
2026-01-05 00:46:54.377393 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:46:54.377411 | orchestrator |
2026-01-05 00:46:54.377425 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-01-05 00:46:54.377436 | orchestrator | Monday 05 January 2026 00:46:53 +0000 (0:00:00.133) 0:00:15.412 ********
2026-01-05 00:46:54.377445 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dd43ce6-96bd-500c-b036-3c9652e3f870', 'data_vg': 'ceph-5dd43ce6-96bd-500c-b036-3c9652e3f870'})
2026-01-05 00:46:54.377455 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6f45f623-6f4a-59be-980f-23e900ac5d1d', 'data_vg': 'ceph-6f45f623-6f4a-59be-980f-23e900ac5d1d'})
2026-01-05 00:46:54.377465 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:46:54.377475 | orchestrator |
2026-01-05 00:46:54.377484 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-01-05 00:46:54.377494 | orchestrator | Monday 05 January 2026 00:46:53 +0000 (0:00:00.159) 0:00:15.571 ********
2026-01-05 00:46:54.377503 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:46:54.377513 | orchestrator |
2026-01-05 00:46:54.377523 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-01-05 00:46:54.377554 | orchestrator | Monday 05 January 2026 00:46:53 +0000 (0:00:00.140) 0:00:15.712 ********
2026-01-05 00:46:54.377569 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dd43ce6-96bd-500c-b036-3c9652e3f870', 'data_vg': 'ceph-5dd43ce6-96bd-500c-b036-3c9652e3f870'})
2026-01-05 00:46:54.377579 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6f45f623-6f4a-59be-980f-23e900ac5d1d', 'data_vg': 'ceph-6f45f623-6f4a-59be-980f-23e900ac5d1d'})
2026-01-05 00:46:54.377588 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:46:54.377598 | orchestrator |
2026-01-05 00:46:54.377607 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-01-05 00:46:54.377617 | orchestrator | Monday 05 January 2026 00:46:53 +0000 (0:00:00.143) 0:00:15.855 ********
2026-01-05 00:46:54.377627 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dd43ce6-96bd-500c-b036-3c9652e3f870', 'data_vg': 'ceph-5dd43ce6-96bd-500c-b036-3c9652e3f870'})
2026-01-05 00:46:54.377636 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6f45f623-6f4a-59be-980f-23e900ac5d1d', 'data_vg': 'ceph-6f45f623-6f4a-59be-980f-23e900ac5d1d'})
2026-01-05 00:46:54.377646 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:46:54.377655 | orchestrator |
2026-01-05 00:46:54.377665 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-01-05 00:46:54.377675 | orchestrator | Monday 05 January 2026 00:46:54 +0000 (0:00:00.148) 0:00:16.004 ********
2026-01-05 00:46:54.377684 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dd43ce6-96bd-500c-b036-3c9652e3f870', 'data_vg': 'ceph-5dd43ce6-96bd-500c-b036-3c9652e3f870'})
2026-01-05 00:46:54.377694 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6f45f623-6f4a-59be-980f-23e900ac5d1d', 'data_vg': 'ceph-6f45f623-6f4a-59be-980f-23e900ac5d1d'})
2026-01-05 00:46:54.377704 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:46:54.377713 | orchestrator |
2026-01-05 00:46:54.377723 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-01-05 00:46:54.377746 | orchestrator | Monday 05 January 2026 00:46:54 +0000 (0:00:00.168) 0:00:16.172 ********
2026-01-05 00:46:54.377762 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:46:54.377778 | orchestrator |
2026-01-05 00:46:54.377794 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-01-05 00:46:54.377824 | orchestrator | Monday 05 January 2026 00:46:54 +0000 (0:00:00.130) 0:00:16.303 ********
2026-01-05 00:47:01.581000 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:47:01.581149 | orchestrator |
2026-01-05 00:47:01.581165 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-01-05 00:47:01.581175 | orchestrator | Monday 05 January 2026 00:46:54 +0000 (0:00:00.151) 0:00:16.455 ********
2026-01-05 00:47:01.581183 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:47:01.581192 | orchestrator |
2026-01-05 00:47:01.581200 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-01-05 00:47:01.581208 | orchestrator | Monday 05 January 2026 00:46:54 +0000 (0:00:00.148) 0:00:16.603 ********
2026-01-05 00:47:01.581216 | orchestrator | ok: [testbed-node-3] => {
2026-01-05 00:47:01.581225 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-01-05 00:47:01.581233 | orchestrator | }
2026-01-05 00:47:01.581243 | orchestrator |
2026-01-05 00:47:01.581248 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-01-05 00:47:01.581253 | orchestrator | Monday 05 January 2026 00:46:55 +0000 (0:00:00.397) 0:00:17.000 ********
2026-01-05 00:47:01.581258 | orchestrator | ok: [testbed-node-3] => {
2026-01-05 00:47:01.581262 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-01-05 00:47:01.581267 | orchestrator | }
2026-01-05 00:47:01.581271 | orchestrator |
2026-01-05 00:47:01.581276 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-01-05 00:47:01.581280 | orchestrator | Monday 05 January 2026 00:46:55 +0000 (0:00:00.145) 0:00:17.146 ********
2026-01-05 00:47:01.581286 | orchestrator | ok: [testbed-node-3] => {
2026-01-05 00:47:01.581291 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-01-05 00:47:01.581295 | orchestrator | }
2026-01-05 00:47:01.581345 | orchestrator |
2026-01-05 00:47:01.581352 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-01-05 00:47:01.581356 | orchestrator | Monday 05 January 2026 00:46:55 +0000 (0:00:00.158) 0:00:17.304 ********
2026-01-05 00:47:01.581361 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:47:01.581365 | orchestrator |
2026-01-05 00:47:01.581370 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-01-05 00:47:01.581374 | orchestrator | Monday 05 January 2026 00:46:56 +0000 (0:00:00.699) 0:00:18.003 ********
2026-01-05 00:47:01.581379 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:47:01.581384 | orchestrator |
2026-01-05 00:47:01.581388 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-01-05 00:47:01.581393 | orchestrator | Monday 05 January 2026 00:46:56 +0000 (0:00:00.574) 0:00:18.577 ********
2026-01-05 00:47:01.581397 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:47:01.581402 | orchestrator |
2026-01-05 00:47:01.581406 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-01-05 00:47:01.581411 | orchestrator | Monday 05 January 2026 00:46:57 +0000 (0:00:00.583) 0:00:19.161 ********
2026-01-05 00:47:01.581415 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:47:01.581420 | orchestrator |
2026-01-05 00:47:01.581424 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-01-05 00:47:01.581429 | orchestrator | Monday 05 January 2026 00:46:57 +0000 (0:00:00.183) 0:00:19.345 ********
2026-01-05 00:47:01.581433 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:47:01.581438 | orchestrator |
2026-01-05 00:47:01.581442 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-01-05 00:47:01.581447 | orchestrator | Monday 05 January 2026 00:46:57 +0000 (0:00:00.137) 0:00:19.482 ********
2026-01-05 00:47:01.581451 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:47:01.581456 | orchestrator |
2026-01-05 00:47:01.581460 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-01-05 00:47:01.581508 | orchestrator | Monday 05 January 2026 00:46:57 +0000 (0:00:00.138) 0:00:19.620 ********
2026-01-05 00:47:01.581518 | orchestrator | ok: [testbed-node-3] => {
2026-01-05 00:47:01.581526 | orchestrator |     "vgs_report": {
2026-01-05 00:47:01.581535 | orchestrator |         "vg": []
2026-01-05 00:47:01.581543 | orchestrator |     }
2026-01-05 00:47:01.581550 | orchestrator | }
2026-01-05 00:47:01.581558 | orchestrator |
2026-01-05 00:47:01.581570 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-01-05 00:47:01.581578 | orchestrator | Monday 05 January 2026 00:46:57 +0000 (0:00:00.154) 0:00:19.775 ********
2026-01-05 00:47:01.581586 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:47:01.581599 | orchestrator |
2026-01-05 00:47:01.581614 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-01-05 00:47:01.581623 | orchestrator | Monday 05 January 2026 00:46:57 +0000 (0:00:00.148) 0:00:19.924 ********
2026-01-05 00:47:01.581631 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:47:01.581639 | orchestrator |
2026-01-05 00:47:01.581652 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-01-05 00:47:01.581667 | orchestrator | Monday 05 January 2026 00:46:58 +0000 (0:00:00.166) 0:00:20.091 ********
2026-01-05 00:47:01.581681 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:47:01.581690 | orchestrator |
2026-01-05 00:47:01.581701 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-01-05 00:47:01.581709 | orchestrator | Monday 05 January 2026 00:46:58 +0000 (0:00:00.391) 0:00:20.482 ********
2026-01-05 00:47:01.581718 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:47:01.581733 | orchestrator |
2026-01-05 00:47:01.581745 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-01-05 00:47:01.581755 | orchestrator | Monday 05 January 2026 00:46:58 +0000 (0:00:00.143) 0:00:20.626 ********
2026-01-05 00:47:01.581768 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:47:01.581782 | orchestrator |
2026-01-05 00:47:01.581793 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-01-05 00:47:01.581805 | orchestrator | Monday 05 January 2026 00:46:58 +0000 (0:00:00.143) 0:00:20.769 ********
2026-01-05 00:47:01.581817 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:47:01.581832 | orchestrator |
2026-01-05 00:47:01.581841 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-01-05 00:47:01.581849 | orchestrator | Monday 05 January 2026 00:46:58 +0000 (0:00:00.158) 0:00:20.927 ********
2026-01-05 00:47:01.581857 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:47:01.581864 | orchestrator |
2026-01-05 00:47:01.581873 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-01-05 00:47:01.581881 | orchestrator | Monday 05 January 2026 00:46:59 +0000 (0:00:00.146) 0:00:21.074 ********
2026-01-05 00:47:01.581911 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:47:01.581921 | orchestrator |
2026-01-05 00:47:01.581928 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-01-05 00:47:01.581936 | orchestrator | Monday 05 January 2026 00:46:59 +0000 (0:00:00.156) 0:00:21.231 ********
2026-01-05 00:47:01.581943 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:47:01.581950 | orchestrator |
2026-01-05 00:47:01.581958 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-01-05 00:47:01.581966 | orchestrator | Monday 05 January 2026 00:46:59 +0000 (0:00:00.151) 0:00:21.383 ********
2026-01-05 00:47:01.581974 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:47:01.581982 | orchestrator |
2026-01-05 00:47:01.581988 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-01-05 00:47:01.581993 | orchestrator | Monday 05 January 2026 00:46:59 +0000 (0:00:00.167) 0:00:21.551 ********
2026-01-05 00:47:01.581998 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:47:01.582002 | orchestrator |
2026-01-05 00:47:01.582006 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-01-05 00:47:01.582011 | orchestrator | Monday 05 January 2026 00:46:59 +0000 (0:00:00.190) 0:00:21.741 ********
2026-01-05 00:47:01.582083 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:47:01.582093 | orchestrator |
2026-01-05 00:47:01.582102 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-01-05 00:47:01.582110 | orchestrator | Monday 05 January 2026 00:46:59 +0000 (0:00:00.142) 0:00:21.884 ********
2026-01-05 00:47:01.582118 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:47:01.582126 | orchestrator |
2026-01-05 00:47:01.582134 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-01-05 00:47:01.582143 | orchestrator | Monday 05 January 2026 00:47:00 +0000 (0:00:00.149) 0:00:22.033 ********
2026-01-05 00:47:01.582150 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:47:01.582159 | orchestrator |
2026-01-05 00:47:01.582165 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-01-05 00:47:01.582172 | orchestrator | Monday 05 January 2026 00:47:00 +0000 (0:00:00.143) 0:00:22.177 ********
2026-01-05 00:47:01.582181 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dd43ce6-96bd-500c-b036-3c9652e3f870', 'data_vg': 'ceph-5dd43ce6-96bd-500c-b036-3c9652e3f870'})
2026-01-05 00:47:01.582190 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6f45f623-6f4a-59be-980f-23e900ac5d1d', 'data_vg': 'ceph-6f45f623-6f4a-59be-980f-23e900ac5d1d'})
2026-01-05 00:47:01.582197 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:47:01.582204 | orchestrator |
2026-01-05 00:47:01.582211 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-01-05 00:47:01.582219 | orchestrator | Monday 05 January 2026 00:47:00 +0000 (0:00:00.421) 0:00:22.598 ********
2026-01-05 00:47:01.582226 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dd43ce6-96bd-500c-b036-3c9652e3f870', 'data_vg': 'ceph-5dd43ce6-96bd-500c-b036-3c9652e3f870'})
2026-01-05 00:47:01.582233 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6f45f623-6f4a-59be-980f-23e900ac5d1d', 'data_vg': 'ceph-6f45f623-6f4a-59be-980f-23e900ac5d1d'})
2026-01-05 00:47:01.582240 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:47:01.582248 | orchestrator |
2026-01-05 00:47:01.582255 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-01-05 00:47:01.582263 | orchestrator | Monday 05 January 2026 00:47:00 +0000 (0:00:00.168) 0:00:22.767 ********
2026-01-05 00:47:01.582270 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dd43ce6-96bd-500c-b036-3c9652e3f870', 'data_vg': 'ceph-5dd43ce6-96bd-500c-b036-3c9652e3f870'})
2026-01-05 00:47:01.582277 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6f45f623-6f4a-59be-980f-23e900ac5d1d', 'data_vg': 'ceph-6f45f623-6f4a-59be-980f-23e900ac5d1d'})
2026-01-05 00:47:01.582284 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:47:01.582292 | orchestrator |
2026-01-05 00:47:01.582298 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-01-05 00:47:01.582323 | orchestrator | Monday 05 January 2026 00:47:01 +0000 (0:00:00.208) 0:00:22.975 ********
2026-01-05 00:47:01.582330 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dd43ce6-96bd-500c-b036-3c9652e3f870', 'data_vg': 'ceph-5dd43ce6-96bd-500c-b036-3c9652e3f870'})
2026-01-05 00:47:01.582337 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6f45f623-6f4a-59be-980f-23e900ac5d1d', 'data_vg': 'ceph-6f45f623-6f4a-59be-980f-23e900ac5d1d'})
2026-01-05 00:47:01.582345 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:47:01.582352 | orchestrator |
2026-01-05 00:47:01.582360 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-01-05 00:47:01.582368 | orchestrator | Monday 05 January 2026 00:47:01 +0000 (0:00:00.202) 0:00:23.177 ********
2026-01-05 00:47:01.582376 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dd43ce6-96bd-500c-b036-3c9652e3f870', 'data_vg': 'ceph-5dd43ce6-96bd-500c-b036-3c9652e3f870'})
2026-01-05 00:47:01.582383 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6f45f623-6f4a-59be-980f-23e900ac5d1d', 'data_vg': 'ceph-6f45f623-6f4a-59be-980f-23e900ac5d1d'})
2026-01-05 00:47:01.582400 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:47:01.582408 | orchestrator |
2026-01-05 00:47:01.582416 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-01-05 00:47:01.582433 | orchestrator | Monday 05 January 2026 00:47:01 +0000 (0:00:00.163) 0:00:23.340 ********
2026-01-05 00:47:01.582450 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dd43ce6-96bd-500c-b036-3c9652e3f870', 'data_vg': 'ceph-5dd43ce6-96bd-500c-b036-3c9652e3f870'})
2026-01-05 00:47:08.093250 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6f45f623-6f4a-59be-980f-23e900ac5d1d', 'data_vg': 'ceph-6f45f623-6f4a-59be-980f-23e900ac5d1d'})
2026-01-05 00:47:08.093388 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:47:08.093400 | orchestrator |
2026-01-05 00:47:08.093407 | orchestrator | TASK [Create DB LVs for
ceph_db_wal_devices] *********************************** 2026-01-05 00:47:08.093414 | orchestrator | Monday 05 January 2026 00:47:01 +0000 (0:00:00.172) 0:00:23.512 ******** 2026-01-05 00:47:08.093419 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dd43ce6-96bd-500c-b036-3c9652e3f870', 'data_vg': 'ceph-5dd43ce6-96bd-500c-b036-3c9652e3f870'})  2026-01-05 00:47:08.093425 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6f45f623-6f4a-59be-980f-23e900ac5d1d', 'data_vg': 'ceph-6f45f623-6f4a-59be-980f-23e900ac5d1d'})  2026-01-05 00:47:08.093431 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:47:08.093436 | orchestrator | 2026-01-05 00:47:08.093441 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-05 00:47:08.093446 | orchestrator | Monday 05 January 2026 00:47:01 +0000 (0:00:00.196) 0:00:23.709 ******** 2026-01-05 00:47:08.093451 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dd43ce6-96bd-500c-b036-3c9652e3f870', 'data_vg': 'ceph-5dd43ce6-96bd-500c-b036-3c9652e3f870'})  2026-01-05 00:47:08.093456 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6f45f623-6f4a-59be-980f-23e900ac5d1d', 'data_vg': 'ceph-6f45f623-6f4a-59be-980f-23e900ac5d1d'})  2026-01-05 00:47:08.093461 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:47:08.093466 | orchestrator | 2026-01-05 00:47:08.093471 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-05 00:47:08.093476 | orchestrator | Monday 05 January 2026 00:47:01 +0000 (0:00:00.173) 0:00:23.883 ******** 2026-01-05 00:47:08.093481 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:47:08.093487 | orchestrator | 2026-01-05 00:47:08.093491 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-05 00:47:08.093496 | orchestrator | Monday 05 January 2026 00:47:02 +0000 
(0:00:00.590) 0:00:24.474 ******** 2026-01-05 00:47:08.093501 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:47:08.093506 | orchestrator | 2026-01-05 00:47:08.093511 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-05 00:47:08.093516 | orchestrator | Monday 05 January 2026 00:47:03 +0000 (0:00:00.672) 0:00:25.147 ******** 2026-01-05 00:47:08.093521 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:47:08.093525 | orchestrator | 2026-01-05 00:47:08.093530 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-05 00:47:08.093535 | orchestrator | Monday 05 January 2026 00:47:03 +0000 (0:00:00.187) 0:00:25.334 ******** 2026-01-05 00:47:08.093540 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-5dd43ce6-96bd-500c-b036-3c9652e3f870', 'vg_name': 'ceph-5dd43ce6-96bd-500c-b036-3c9652e3f870'}) 2026-01-05 00:47:08.093562 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-6f45f623-6f4a-59be-980f-23e900ac5d1d', 'vg_name': 'ceph-6f45f623-6f4a-59be-980f-23e900ac5d1d'}) 2026-01-05 00:47:08.093567 | orchestrator | 2026-01-05 00:47:08.093572 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-05 00:47:08.093577 | orchestrator | Monday 05 January 2026 00:47:03 +0000 (0:00:00.241) 0:00:25.576 ******** 2026-01-05 00:47:08.093599 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dd43ce6-96bd-500c-b036-3c9652e3f870', 'data_vg': 'ceph-5dd43ce6-96bd-500c-b036-3c9652e3f870'})  2026-01-05 00:47:08.093605 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6f45f623-6f4a-59be-980f-23e900ac5d1d', 'data_vg': 'ceph-6f45f623-6f4a-59be-980f-23e900ac5d1d'})  2026-01-05 00:47:08.093609 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:47:08.093614 | orchestrator | 2026-01-05 00:47:08.093619 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-01-05 00:47:08.093624 | orchestrator | Monday 05 January 2026 00:47:04 +0000 (0:00:00.465) 0:00:26.041 ******** 2026-01-05 00:47:08.093629 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dd43ce6-96bd-500c-b036-3c9652e3f870', 'data_vg': 'ceph-5dd43ce6-96bd-500c-b036-3c9652e3f870'})  2026-01-05 00:47:08.093634 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6f45f623-6f4a-59be-980f-23e900ac5d1d', 'data_vg': 'ceph-6f45f623-6f4a-59be-980f-23e900ac5d1d'})  2026-01-05 00:47:08.093638 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:47:08.093644 | orchestrator | 2026-01-05 00:47:08.093648 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-05 00:47:08.093653 | orchestrator | Monday 05 January 2026 00:47:04 +0000 (0:00:00.202) 0:00:26.244 ******** 2026-01-05 00:47:08.093658 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dd43ce6-96bd-500c-b036-3c9652e3f870', 'data_vg': 'ceph-5dd43ce6-96bd-500c-b036-3c9652e3f870'})  2026-01-05 00:47:08.093663 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6f45f623-6f4a-59be-980f-23e900ac5d1d', 'data_vg': 'ceph-6f45f623-6f4a-59be-980f-23e900ac5d1d'})  2026-01-05 00:47:08.093668 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:47:08.093673 | orchestrator | 2026-01-05 00:47:08.093677 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-05 00:47:08.093682 | orchestrator | Monday 05 January 2026 00:47:04 +0000 (0:00:00.201) 0:00:26.446 ******** 2026-01-05 00:47:08.093699 | orchestrator | ok: [testbed-node-3] => { 2026-01-05 00:47:08.093705 | orchestrator |  "lvm_report": { 2026-01-05 00:47:08.093710 | orchestrator |  "lv": [ 2026-01-05 00:47:08.093715 | orchestrator |  { 2026-01-05 00:47:08.093720 | orchestrator |  "lv_name": 
"osd-block-5dd43ce6-96bd-500c-b036-3c9652e3f870", 2026-01-05 00:47:08.093726 | orchestrator |  "vg_name": "ceph-5dd43ce6-96bd-500c-b036-3c9652e3f870" 2026-01-05 00:47:08.093731 | orchestrator |  }, 2026-01-05 00:47:08.093736 | orchestrator |  { 2026-01-05 00:47:08.093741 | orchestrator |  "lv_name": "osd-block-6f45f623-6f4a-59be-980f-23e900ac5d1d", 2026-01-05 00:47:08.093745 | orchestrator |  "vg_name": "ceph-6f45f623-6f4a-59be-980f-23e900ac5d1d" 2026-01-05 00:47:08.093750 | orchestrator |  } 2026-01-05 00:47:08.093755 | orchestrator |  ], 2026-01-05 00:47:08.093760 | orchestrator |  "pv": [ 2026-01-05 00:47:08.093765 | orchestrator |  { 2026-01-05 00:47:08.093770 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-05 00:47:08.093774 | orchestrator |  "vg_name": "ceph-5dd43ce6-96bd-500c-b036-3c9652e3f870" 2026-01-05 00:47:08.093779 | orchestrator |  }, 2026-01-05 00:47:08.093785 | orchestrator |  { 2026-01-05 00:47:08.093791 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-05 00:47:08.093799 | orchestrator |  "vg_name": "ceph-6f45f623-6f4a-59be-980f-23e900ac5d1d" 2026-01-05 00:47:08.093807 | orchestrator |  } 2026-01-05 00:47:08.093815 | orchestrator |  ] 2026-01-05 00:47:08.093823 | orchestrator |  } 2026-01-05 00:47:08.093831 | orchestrator | } 2026-01-05 00:47:08.093839 | orchestrator | 2026-01-05 00:47:08.093846 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-01-05 00:47:08.093853 | orchestrator | 2026-01-05 00:47:08.093861 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-05 00:47:08.093877 | orchestrator | Monday 05 January 2026 00:47:04 +0000 (0:00:00.384) 0:00:26.830 ******** 2026-01-05 00:47:08.093885 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-05 00:47:08.093893 | orchestrator | 2026-01-05 00:47:08.093902 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-05 
00:47:08.093910 | orchestrator | Monday 05 January 2026 00:47:05 +0000 (0:00:00.336) 0:00:27.167 ******** 2026-01-05 00:47:08.093915 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:47:08.093921 | orchestrator | 2026-01-05 00:47:08.093927 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:47:08.093933 | orchestrator | Monday 05 January 2026 00:47:05 +0000 (0:00:00.272) 0:00:27.440 ******** 2026-01-05 00:47:08.093939 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-01-05 00:47:08.093945 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-01-05 00:47:08.093950 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-01-05 00:47:08.093956 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-01-05 00:47:08.093962 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-01-05 00:47:08.093968 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-01-05 00:47:08.093978 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-01-05 00:47:08.093984 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-01-05 00:47:08.093990 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-01-05 00:47:08.093995 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-01-05 00:47:08.094001 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-01-05 00:47:08.094007 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-01-05 00:47:08.094054 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-01-05 00:47:08.094062 | orchestrator | 2026-01-05 00:47:08.094068 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:47:08.094074 | orchestrator | Monday 05 January 2026 00:47:06 +0000 (0:00:00.578) 0:00:28.019 ******** 2026-01-05 00:47:08.094080 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:47:08.094086 | orchestrator | 2026-01-05 00:47:08.094092 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:47:08.094098 | orchestrator | Monday 05 January 2026 00:47:06 +0000 (0:00:00.240) 0:00:28.259 ******** 2026-01-05 00:47:08.094104 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:47:08.094110 | orchestrator | 2026-01-05 00:47:08.094116 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:47:08.094122 | orchestrator | Monday 05 January 2026 00:47:06 +0000 (0:00:00.248) 0:00:28.507 ******** 2026-01-05 00:47:08.094129 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:47:08.094134 | orchestrator | 2026-01-05 00:47:08.094139 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:47:08.094143 | orchestrator | Monday 05 January 2026 00:47:07 +0000 (0:00:00.789) 0:00:29.297 ******** 2026-01-05 00:47:08.094148 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:47:08.094153 | orchestrator | 2026-01-05 00:47:08.094158 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:47:08.094163 | orchestrator | Monday 05 January 2026 00:47:07 +0000 (0:00:00.249) 0:00:29.547 ******** 2026-01-05 00:47:08.094167 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:47:08.094172 | orchestrator | 2026-01-05 00:47:08.094177 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-01-05 00:47:08.094191 | orchestrator | Monday 05 January 2026 00:47:07 +0000 (0:00:00.224) 0:00:29.771 ******** 2026-01-05 00:47:08.094196 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:47:08.094201 | orchestrator | 2026-01-05 00:47:08.094211 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:47:20.856215 | orchestrator | Monday 05 January 2026 00:47:08 +0000 (0:00:00.250) 0:00:30.021 ******** 2026-01-05 00:47:20.856385 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:47:20.856401 | orchestrator | 2026-01-05 00:47:20.856409 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:47:20.856417 | orchestrator | Monday 05 January 2026 00:47:08 +0000 (0:00:00.240) 0:00:30.261 ******** 2026-01-05 00:47:20.856424 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:47:20.856431 | orchestrator | 2026-01-05 00:47:20.856437 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:47:20.856444 | orchestrator | Monday 05 January 2026 00:47:08 +0000 (0:00:00.206) 0:00:30.468 ******** 2026-01-05 00:47:20.856450 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64) 2026-01-05 00:47:20.856459 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64) 2026-01-05 00:47:20.856465 | orchestrator | 2026-01-05 00:47:20.856471 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:47:20.856478 | orchestrator | Monday 05 January 2026 00:47:09 +0000 (0:00:00.485) 0:00:30.953 ******** 2026-01-05 00:47:20.856483 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_faa0d012-340f-4cbd-a064-876345a11d6a) 2026-01-05 00:47:20.856490 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_faa0d012-340f-4cbd-a064-876345a11d6a) 2026-01-05 00:47:20.856496 | orchestrator | 2026-01-05 00:47:20.856502 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:47:20.856509 | orchestrator | Monday 05 January 2026 00:47:09 +0000 (0:00:00.508) 0:00:31.461 ******** 2026-01-05 00:47:20.856515 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_79f451b0-665e-4ae6-bc28-e4c9d18e1f8d) 2026-01-05 00:47:20.856521 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_79f451b0-665e-4ae6-bc28-e4c9d18e1f8d) 2026-01-05 00:47:20.856527 | orchestrator | 2026-01-05 00:47:20.856533 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:47:20.856538 | orchestrator | Monday 05 January 2026 00:47:09 +0000 (0:00:00.472) 0:00:31.934 ******** 2026-01-05 00:47:20.856544 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_165d58d7-2860-4843-bbd3-8318e20b6051) 2026-01-05 00:47:20.857132 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_165d58d7-2860-4843-bbd3-8318e20b6051) 2026-01-05 00:47:20.857147 | orchestrator | 2026-01-05 00:47:20.857154 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 00:47:20.857161 | orchestrator | Monday 05 January 2026 00:47:10 +0000 (0:00:00.716) 0:00:32.650 ******** 2026-01-05 00:47:20.857168 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-05 00:47:20.857174 | orchestrator | 2026-01-05 00:47:20.857180 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:47:20.857187 | orchestrator | Monday 05 January 2026 00:47:11 +0000 (0:00:00.710) 0:00:33.361 ******** 2026-01-05 00:47:20.857193 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-01-05 00:47:20.857201 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-01-05 00:47:20.857208 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-01-05 00:47:20.857214 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-01-05 00:47:20.857221 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-01-05 00:47:20.857334 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-01-05 00:47:20.857344 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-01-05 00:47:20.857351 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-01-05 00:47:20.857357 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-01-05 00:47:20.857363 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-01-05 00:47:20.857369 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-01-05 00:47:20.857375 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-01-05 00:47:20.857380 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-01-05 00:47:20.857386 | orchestrator | 2026-01-05 00:47:20.857392 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:47:20.857398 | orchestrator | Monday 05 January 2026 00:47:12 +0000 (0:00:01.012) 0:00:34.373 ******** 2026-01-05 00:47:20.857404 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:47:20.857411 | orchestrator | 2026-01-05 
00:47:20.857417 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:47:20.857423 | orchestrator | Monday 05 January 2026 00:47:12 +0000 (0:00:00.250) 0:00:34.624 ******** 2026-01-05 00:47:20.857430 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:47:20.857435 | orchestrator | 2026-01-05 00:47:20.857441 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:47:20.857448 | orchestrator | Monday 05 January 2026 00:47:12 +0000 (0:00:00.210) 0:00:34.834 ******** 2026-01-05 00:47:20.857455 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:47:20.857461 | orchestrator | 2026-01-05 00:47:20.857490 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:47:20.857530 | orchestrator | Monday 05 January 2026 00:47:13 +0000 (0:00:00.308) 0:00:35.143 ******** 2026-01-05 00:47:20.857536 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:47:20.857543 | orchestrator | 2026-01-05 00:47:20.857549 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:47:20.857555 | orchestrator | Monday 05 January 2026 00:47:13 +0000 (0:00:00.304) 0:00:35.447 ******** 2026-01-05 00:47:20.857561 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:47:20.857568 | orchestrator | 2026-01-05 00:47:20.857574 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:47:20.857581 | orchestrator | Monday 05 January 2026 00:47:13 +0000 (0:00:00.259) 0:00:35.707 ******** 2026-01-05 00:47:20.857587 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:47:20.857593 | orchestrator | 2026-01-05 00:47:20.857921 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:47:20.857934 | orchestrator | Monday 05 January 2026 00:47:14 +0000 (0:00:00.251) 
0:00:35.958 ******** 2026-01-05 00:47:20.857941 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:47:20.857948 | orchestrator | 2026-01-05 00:47:20.857955 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:47:20.857961 | orchestrator | Monday 05 January 2026 00:47:14 +0000 (0:00:00.308) 0:00:36.267 ******** 2026-01-05 00:47:20.857967 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:47:20.857972 | orchestrator | 2026-01-05 00:47:20.857978 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:47:20.857984 | orchestrator | Monday 05 January 2026 00:47:14 +0000 (0:00:00.243) 0:00:36.510 ******** 2026-01-05 00:47:20.857991 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-01-05 00:47:20.857997 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-01-05 00:47:20.858005 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-01-05 00:47:20.858011 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-01-05 00:47:20.858106 | orchestrator | 2026-01-05 00:47:20.858115 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:47:20.858122 | orchestrator | Monday 05 January 2026 00:47:15 +0000 (0:00:00.990) 0:00:37.500 ******** 2026-01-05 00:47:20.858256 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:47:20.858265 | orchestrator | 2026-01-05 00:47:20.858271 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:47:20.858295 | orchestrator | Monday 05 January 2026 00:47:15 +0000 (0:00:00.227) 0:00:37.728 ******** 2026-01-05 00:47:20.858302 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:47:20.858308 | orchestrator | 2026-01-05 00:47:20.858314 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:47:20.858321 | orchestrator | Monday 05 
January 2026 00:47:16 +0000 (0:00:00.909) 0:00:38.637 ******** 2026-01-05 00:47:20.858328 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:47:20.858334 | orchestrator | 2026-01-05 00:47:20.858341 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:47:20.858348 | orchestrator | Monday 05 January 2026 00:47:16 +0000 (0:00:00.190) 0:00:38.828 ******** 2026-01-05 00:47:20.858354 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:47:20.858360 | orchestrator | 2026-01-05 00:47:20.858367 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-05 00:47:20.858381 | orchestrator | Monday 05 January 2026 00:47:17 +0000 (0:00:00.191) 0:00:39.019 ******** 2026-01-05 00:47:20.858387 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:47:20.858393 | orchestrator | 2026-01-05 00:47:20.858400 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-05 00:47:20.858406 | orchestrator | Monday 05 January 2026 00:47:17 +0000 (0:00:00.159) 0:00:39.178 ******** 2026-01-05 00:47:20.858413 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bd4e3544-7c7e-58ac-a4cc-590b648d75bf'}}) 2026-01-05 00:47:20.858420 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '35e03706-0bf5-5720-bc24-6001f60a2be0'}}) 2026-01-05 00:47:20.858427 | orchestrator | 2026-01-05 00:47:20.858433 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-05 00:47:20.858439 | orchestrator | Monday 05 January 2026 00:47:17 +0000 (0:00:00.183) 0:00:39.361 ******** 2026-01-05 00:47:20.858447 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-bd4e3544-7c7e-58ac-a4cc-590b648d75bf', 'data_vg': 'ceph-bd4e3544-7c7e-58ac-a4cc-590b648d75bf'}) 2026-01-05 00:47:20.858455 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-35e03706-0bf5-5720-bc24-6001f60a2be0', 'data_vg': 'ceph-35e03706-0bf5-5720-bc24-6001f60a2be0'}) 2026-01-05 00:47:20.858461 | orchestrator | 2026-01-05 00:47:20.858468 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-05 00:47:20.858473 | orchestrator | Monday 05 January 2026 00:47:19 +0000 (0:00:01.929) 0:00:41.291 ******** 2026-01-05 00:47:20.858479 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd4e3544-7c7e-58ac-a4cc-590b648d75bf', 'data_vg': 'ceph-bd4e3544-7c7e-58ac-a4cc-590b648d75bf'})  2026-01-05 00:47:20.858487 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35e03706-0bf5-5720-bc24-6001f60a2be0', 'data_vg': 'ceph-35e03706-0bf5-5720-bc24-6001f60a2be0'})  2026-01-05 00:47:20.858494 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:47:20.858500 | orchestrator | 2026-01-05 00:47:20.858507 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-05 00:47:20.858513 | orchestrator | Monday 05 January 2026 00:47:19 +0000 (0:00:00.147) 0:00:41.439 ******** 2026-01-05 00:47:20.858519 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-bd4e3544-7c7e-58ac-a4cc-590b648d75bf', 'data_vg': 'ceph-bd4e3544-7c7e-58ac-a4cc-590b648d75bf'}) 2026-01-05 00:47:20.858538 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-35e03706-0bf5-5720-bc24-6001f60a2be0', 'data_vg': 'ceph-35e03706-0bf5-5720-bc24-6001f60a2be0'}) 2026-01-05 00:47:27.173139 | orchestrator | 2026-01-05 00:47:27.173248 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-05 00:47:27.173261 | orchestrator | Monday 05 January 2026 00:47:20 +0000 (0:00:01.345) 0:00:42.784 ******** 2026-01-05 00:47:27.173269 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd4e3544-7c7e-58ac-a4cc-590b648d75bf', 'data_vg': 
'ceph-bd4e3544-7c7e-58ac-a4cc-590b648d75bf'})  2026-01-05 00:47:27.173353 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35e03706-0bf5-5720-bc24-6001f60a2be0', 'data_vg': 'ceph-35e03706-0bf5-5720-bc24-6001f60a2be0'})  2026-01-05 00:47:27.173361 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:47:27.173369 | orchestrator | 2026-01-05 00:47:27.173376 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-05 00:47:27.173382 | orchestrator | Monday 05 January 2026 00:47:21 +0000 (0:00:00.201) 0:00:42.985 ******** 2026-01-05 00:47:27.173389 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:47:27.173396 | orchestrator | 2026-01-05 00:47:27.173402 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-05 00:47:27.173408 | orchestrator | Monday 05 January 2026 00:47:21 +0000 (0:00:00.116) 0:00:43.102 ******** 2026-01-05 00:47:27.173415 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd4e3544-7c7e-58ac-a4cc-590b648d75bf', 'data_vg': 'ceph-bd4e3544-7c7e-58ac-a4cc-590b648d75bf'})  2026-01-05 00:47:27.173421 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35e03706-0bf5-5720-bc24-6001f60a2be0', 'data_vg': 'ceph-35e03706-0bf5-5720-bc24-6001f60a2be0'})  2026-01-05 00:47:27.173427 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:47:27.173433 | orchestrator | 2026-01-05 00:47:27.173439 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-05 00:47:27.173445 | orchestrator | Monday 05 January 2026 00:47:21 +0000 (0:00:00.139) 0:00:43.242 ******** 2026-01-05 00:47:27.173452 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:47:27.173458 | orchestrator | 2026-01-05 00:47:27.173465 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-05 00:47:27.173471 | orchestrator | Monday 
05 January 2026 00:47:21 +0000 (0:00:00.150) 0:00:43.392 ******** 2026-01-05 00:47:27.173477 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd4e3544-7c7e-58ac-a4cc-590b648d75bf', 'data_vg': 'ceph-bd4e3544-7c7e-58ac-a4cc-590b648d75bf'})  2026-01-05 00:47:27.173483 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35e03706-0bf5-5720-bc24-6001f60a2be0', 'data_vg': 'ceph-35e03706-0bf5-5720-bc24-6001f60a2be0'})  2026-01-05 00:47:27.173489 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:47:27.173495 | orchestrator | 2026-01-05 00:47:27.173501 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-05 00:47:27.173529 | orchestrator | Monday 05 January 2026 00:47:21 +0000 (0:00:00.310) 0:00:43.702 ******** 2026-01-05 00:47:27.173535 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:47:27.173542 | orchestrator | 2026-01-05 00:47:27.173548 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-05 00:47:27.173554 | orchestrator | Monday 05 January 2026 00:47:21 +0000 (0:00:00.143) 0:00:43.846 ******** 2026-01-05 00:47:27.173560 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd4e3544-7c7e-58ac-a4cc-590b648d75bf', 'data_vg': 'ceph-bd4e3544-7c7e-58ac-a4cc-590b648d75bf'})  2026-01-05 00:47:27.173566 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35e03706-0bf5-5720-bc24-6001f60a2be0', 'data_vg': 'ceph-35e03706-0bf5-5720-bc24-6001f60a2be0'})  2026-01-05 00:47:27.173573 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:47:27.173579 | orchestrator | 2026-01-05 00:47:27.173586 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-05 00:47:27.173593 | orchestrator | Monday 05 January 2026 00:47:22 +0000 (0:00:00.129) 0:00:43.976 ******** 2026-01-05 00:47:27.173600 | orchestrator | ok: [testbed-node-4] 
2026-01-05 00:47:27.173635 | orchestrator |
2026-01-05 00:47:27.173643 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-01-05 00:47:27.173649 | orchestrator | Monday 05 January 2026 00:47:22 +0000 (0:00:00.162) 0:00:44.138 ********
2026-01-05 00:47:27.173656 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd4e3544-7c7e-58ac-a4cc-590b648d75bf', 'data_vg': 'ceph-bd4e3544-7c7e-58ac-a4cc-590b648d75bf'})
2026-01-05 00:47:27.173662 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35e03706-0bf5-5720-bc24-6001f60a2be0', 'data_vg': 'ceph-35e03706-0bf5-5720-bc24-6001f60a2be0'})
2026-01-05 00:47:27.173669 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:27.173676 | orchestrator |
2026-01-05 00:47:27.173682 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-01-05 00:47:27.173689 | orchestrator | Monday 05 January 2026 00:47:22 +0000 (0:00:00.161) 0:00:44.299 ********
2026-01-05 00:47:27.173696 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd4e3544-7c7e-58ac-a4cc-590b648d75bf', 'data_vg': 'ceph-bd4e3544-7c7e-58ac-a4cc-590b648d75bf'})
2026-01-05 00:47:27.173733 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35e03706-0bf5-5720-bc24-6001f60a2be0', 'data_vg': 'ceph-35e03706-0bf5-5720-bc24-6001f60a2be0'})
2026-01-05 00:47:27.173742 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:27.173749 | orchestrator |
2026-01-05 00:47:27.173757 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-01-05 00:47:27.173784 | orchestrator | Monday 05 January 2026 00:47:22 +0000 (0:00:00.168) 0:00:44.468 ********
2026-01-05 00:47:27.173791 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd4e3544-7c7e-58ac-a4cc-590b648d75bf', 'data_vg': 'ceph-bd4e3544-7c7e-58ac-a4cc-590b648d75bf'})
2026-01-05 00:47:27.173798 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35e03706-0bf5-5720-bc24-6001f60a2be0', 'data_vg': 'ceph-35e03706-0bf5-5720-bc24-6001f60a2be0'})
2026-01-05 00:47:27.173805 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:27.173811 | orchestrator |
2026-01-05 00:47:27.173818 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-01-05 00:47:27.173824 | orchestrator | Monday 05 January 2026 00:47:22 +0000 (0:00:00.163) 0:00:44.632 ********
2026-01-05 00:47:27.173918 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:27.173926 | orchestrator |
2026-01-05 00:47:27.173933 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-01-05 00:47:27.173940 | orchestrator | Monday 05 January 2026 00:47:22 +0000 (0:00:00.149) 0:00:44.781 ********
2026-01-05 00:47:27.173965 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:27.173972 | orchestrator |
2026-01-05 00:47:27.173979 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-01-05 00:47:27.174002 | orchestrator | Monday 05 January 2026 00:47:22 +0000 (0:00:00.146) 0:00:44.928 ********
2026-01-05 00:47:27.174009 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:27.174110 | orchestrator |
2026-01-05 00:47:27.174122 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-01-05 00:47:27.174129 | orchestrator | Monday 05 January 2026 00:47:23 +0000 (0:00:00.173) 0:00:45.101 ********
2026-01-05 00:47:27.174136 | orchestrator | ok: [testbed-node-4] => {
2026-01-05 00:47:27.174142 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-01-05 00:47:27.174148 | orchestrator | }
2026-01-05 00:47:27.174155 | orchestrator |
2026-01-05 00:47:27.174162 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-01-05 00:47:27.174169 | orchestrator | Monday 05 January 2026 00:47:23 +0000 (0:00:00.157) 0:00:45.297 ********
2026-01-05 00:47:27.174176 | orchestrator | ok: [testbed-node-4] => {
2026-01-05 00:47:27.174182 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-01-05 00:47:27.174189 | orchestrator | }
2026-01-05 00:47:27.174195 | orchestrator |
2026-01-05 00:47:27.174201 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-01-05 00:47:27.174209 | orchestrator | Monday 05 January 2026 00:47:23 +0000 (0:00:00.157) 0:00:45.454 ********
2026-01-05 00:47:27.174226 | orchestrator | ok: [testbed-node-4] => {
2026-01-05 00:47:27.174233 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-01-05 00:47:27.174240 | orchestrator | }
2026-01-05 00:47:27.174246 | orchestrator |
2026-01-05 00:47:27.174252 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-01-05 00:47:27.174257 | orchestrator | Monday 05 January 2026 00:47:24 +0000 (0:00:00.578) 0:00:46.032 ********
2026-01-05 00:47:27.174263 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:47:27.174269 | orchestrator |
2026-01-05 00:47:27.174294 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-01-05 00:47:27.174301 | orchestrator | Monday 05 January 2026 00:47:24 +0000 (0:00:00.603) 0:00:46.636 ********
2026-01-05 00:47:27.174306 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:47:27.174312 | orchestrator |
2026-01-05 00:47:27.174318 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-01-05 00:47:27.174324 | orchestrator | Monday 05 January 2026 00:47:25 +0000 (0:00:00.652) 0:00:47.288 ********
2026-01-05 00:47:27.174330 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:47:27.174336 | orchestrator |
2026-01-05 00:47:27.174342 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-01-05 00:47:27.174348 | orchestrator | Monday 05 January 2026 00:47:25 +0000 (0:00:00.558) 0:00:47.847 ********
2026-01-05 00:47:27.174354 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:47:27.174360 | orchestrator |
2026-01-05 00:47:27.174365 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-01-05 00:47:27.174371 | orchestrator | Monday 05 January 2026 00:47:26 +0000 (0:00:00.171) 0:00:48.018 ********
2026-01-05 00:47:27.174377 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:27.174384 | orchestrator |
2026-01-05 00:47:27.174401 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-01-05 00:47:27.174408 | orchestrator | Monday 05 January 2026 00:47:26 +0000 (0:00:00.146) 0:00:48.164 ********
2026-01-05 00:47:27.174414 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:27.174420 | orchestrator |
2026-01-05 00:47:27.174427 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-01-05 00:47:27.174433 | orchestrator | Monday 05 January 2026 00:47:26 +0000 (0:00:00.135) 0:00:48.300 ********
2026-01-05 00:47:27.174439 | orchestrator | ok: [testbed-node-4] => {
2026-01-05 00:47:27.174445 | orchestrator |     "vgs_report": {
2026-01-05 00:47:27.174452 | orchestrator |         "vg": []
2026-01-05 00:47:27.174459 | orchestrator |     }
2026-01-05 00:47:27.174465 | orchestrator | }
2026-01-05 00:47:27.174472 | orchestrator |
2026-01-05 00:47:27.174478 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-01-05 00:47:27.174485 | orchestrator | Monday 05 January 2026 00:47:26 +0000 (0:00:00.173) 0:00:48.473 ********
2026-01-05 00:47:27.174491 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:27.174497 | orchestrator |
2026-01-05 00:47:27.174504 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-01-05 00:47:27.174510 | orchestrator | Monday 05 January 2026 00:47:26 +0000 (0:00:00.154) 0:00:48.628 ********
2026-01-05 00:47:27.174516 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:27.174523 | orchestrator |
2026-01-05 00:47:27.174529 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-01-05 00:47:27.174536 | orchestrator | Monday 05 January 2026 00:47:26 +0000 (0:00:00.155) 0:00:48.784 ********
2026-01-05 00:47:27.174542 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:27.174548 | orchestrator |
2026-01-05 00:47:27.174554 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-01-05 00:47:27.174561 | orchestrator | Monday 05 January 2026 00:47:26 +0000 (0:00:00.152) 0:00:48.937 ********
2026-01-05 00:47:27.174567 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:27.174573 | orchestrator |
2026-01-05 00:47:27.174590 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-01-05 00:47:32.543711 | orchestrator | Monday 05 January 2026 00:47:27 +0000 (0:00:00.166) 0:00:49.103 ********
2026-01-05 00:47:32.543817 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:32.543825 | orchestrator |
2026-01-05 00:47:32.543831 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-01-05 00:47:32.543836 | orchestrator | Monday 05 January 2026 00:47:27 +0000 (0:00:00.472) 0:00:49.576 ********
2026-01-05 00:47:32.543841 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:32.543845 | orchestrator |
2026-01-05 00:47:32.543850 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-01-05 00:47:32.543854 | orchestrator | Monday 05 January 2026 00:47:27 +0000 (0:00:00.155) 0:00:49.732 ********
2026-01-05 00:47:32.543859 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:32.543863 | orchestrator |
2026-01-05 00:47:32.543867 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-01-05 00:47:32.543872 | orchestrator | Monday 05 January 2026 00:47:27 +0000 (0:00:00.177) 0:00:49.909 ********
2026-01-05 00:47:32.543876 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:32.543880 | orchestrator |
2026-01-05 00:47:32.543885 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-01-05 00:47:32.543889 | orchestrator | Monday 05 January 2026 00:47:28 +0000 (0:00:00.145) 0:00:50.055 ********
2026-01-05 00:47:32.543893 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:32.543898 | orchestrator |
2026-01-05 00:47:32.543902 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-01-05 00:47:32.543907 | orchestrator | Monday 05 January 2026 00:47:28 +0000 (0:00:00.166) 0:00:50.221 ********
2026-01-05 00:47:32.543911 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:32.543917 | orchestrator |
2026-01-05 00:47:32.543924 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-01-05 00:47:32.543932 | orchestrator | Monday 05 January 2026 00:47:28 +0000 (0:00:00.153) 0:00:50.375 ********
2026-01-05 00:47:32.543942 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:32.543951 | orchestrator |
2026-01-05 00:47:32.543957 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-01-05 00:47:32.543964 | orchestrator | Monday 05 January 2026 00:47:28 +0000 (0:00:00.154) 0:00:50.529 ********
2026-01-05 00:47:32.543971 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:32.543977 | orchestrator |
2026-01-05 00:47:32.543984 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-01-05 00:47:32.543990 | orchestrator | Monday 05 January 2026 00:47:28 +0000 (0:00:00.157) 0:00:50.687 ********
2026-01-05 00:47:32.543997 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:32.544004 | orchestrator |
2026-01-05 00:47:32.544011 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-01-05 00:47:32.544017 | orchestrator | Monday 05 January 2026 00:47:28 +0000 (0:00:00.165) 0:00:50.853 ********
2026-01-05 00:47:32.544025 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:32.544032 | orchestrator |
2026-01-05 00:47:32.544039 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-01-05 00:47:32.544061 | orchestrator | Monday 05 January 2026 00:47:29 +0000 (0:00:00.157) 0:00:51.010 ********
2026-01-05 00:47:32.544067 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd4e3544-7c7e-58ac-a4cc-590b648d75bf', 'data_vg': 'ceph-bd4e3544-7c7e-58ac-a4cc-590b648d75bf'})
2026-01-05 00:47:32.544076 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35e03706-0bf5-5720-bc24-6001f60a2be0', 'data_vg': 'ceph-35e03706-0bf5-5720-bc24-6001f60a2be0'})
2026-01-05 00:47:32.544083 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:32.544087 | orchestrator |
2026-01-05 00:47:32.544091 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-01-05 00:47:32.544096 | orchestrator | Monday 05 January 2026 00:47:29 +0000 (0:00:00.179) 0:00:51.189 ********
2026-01-05 00:47:32.544100 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd4e3544-7c7e-58ac-a4cc-590b648d75bf', 'data_vg': 'ceph-bd4e3544-7c7e-58ac-a4cc-590b648d75bf'})
2026-01-05 00:47:32.544111 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35e03706-0bf5-5720-bc24-6001f60a2be0', 'data_vg': 'ceph-35e03706-0bf5-5720-bc24-6001f60a2be0'})
2026-01-05 00:47:32.544119 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:32.544125 | orchestrator |
2026-01-05 00:47:32.544132 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-01-05 00:47:32.544139 | orchestrator | Monday 05 January 2026 00:47:29 +0000 (0:00:00.157) 0:00:51.347 ********
2026-01-05 00:47:32.544146 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd4e3544-7c7e-58ac-a4cc-590b648d75bf', 'data_vg': 'ceph-bd4e3544-7c7e-58ac-a4cc-590b648d75bf'})
2026-01-05 00:47:32.544153 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35e03706-0bf5-5720-bc24-6001f60a2be0', 'data_vg': 'ceph-35e03706-0bf5-5720-bc24-6001f60a2be0'})
2026-01-05 00:47:32.544161 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:32.544168 | orchestrator |
2026-01-05 00:47:32.544175 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-01-05 00:47:32.544181 | orchestrator | Monday 05 January 2026 00:47:29 +0000 (0:00:00.429) 0:00:51.777 ********
2026-01-05 00:47:32.544186 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd4e3544-7c7e-58ac-a4cc-590b648d75bf', 'data_vg': 'ceph-bd4e3544-7c7e-58ac-a4cc-590b648d75bf'})
2026-01-05 00:47:32.544191 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35e03706-0bf5-5720-bc24-6001f60a2be0', 'data_vg': 'ceph-35e03706-0bf5-5720-bc24-6001f60a2be0'})
2026-01-05 00:47:32.544198 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:32.544205 | orchestrator |
2026-01-05 00:47:32.544229 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-01-05 00:47:32.544237 | orchestrator | Monday 05 January 2026 00:47:30 +0000 (0:00:00.180) 0:00:51.957 ********
2026-01-05 00:47:32.544244 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd4e3544-7c7e-58ac-a4cc-590b648d75bf', 'data_vg': 'ceph-bd4e3544-7c7e-58ac-a4cc-590b648d75bf'})
2026-01-05 00:47:32.544252 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35e03706-0bf5-5720-bc24-6001f60a2be0', 'data_vg': 'ceph-35e03706-0bf5-5720-bc24-6001f60a2be0'})
2026-01-05 00:47:32.544260 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:32.544290 | orchestrator |
2026-01-05 00:47:32.544296 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-01-05 00:47:32.544302 | orchestrator | Monday 05 January 2026 00:47:30 +0000 (0:00:00.208) 0:00:52.166 ********
2026-01-05 00:47:32.544308 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd4e3544-7c7e-58ac-a4cc-590b648d75bf', 'data_vg': 'ceph-bd4e3544-7c7e-58ac-a4cc-590b648d75bf'})
2026-01-05 00:47:32.544315 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35e03706-0bf5-5720-bc24-6001f60a2be0', 'data_vg': 'ceph-35e03706-0bf5-5720-bc24-6001f60a2be0'})
2026-01-05 00:47:32.544323 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:32.544330 | orchestrator |
2026-01-05 00:47:32.544337 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-01-05 00:47:32.544344 | orchestrator | Monday 05 January 2026 00:47:30 +0000 (0:00:00.184) 0:00:52.350 ********
2026-01-05 00:47:32.544351 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd4e3544-7c7e-58ac-a4cc-590b648d75bf', 'data_vg': 'ceph-bd4e3544-7c7e-58ac-a4cc-590b648d75bf'})
2026-01-05 00:47:32.544359 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35e03706-0bf5-5720-bc24-6001f60a2be0', 'data_vg': 'ceph-35e03706-0bf5-5720-bc24-6001f60a2be0'})
2026-01-05 00:47:32.544368 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:32.544375 | orchestrator |
2026-01-05 00:47:32.544382 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-01-05 00:47:32.544389 | orchestrator | Monday 05 January 2026 00:47:30 +0000 (0:00:00.180) 0:00:52.531 ********
2026-01-05 00:47:32.544404 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd4e3544-7c7e-58ac-a4cc-590b648d75bf', 'data_vg': 'ceph-bd4e3544-7c7e-58ac-a4cc-590b648d75bf'})
2026-01-05 00:47:32.544416 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35e03706-0bf5-5720-bc24-6001f60a2be0', 'data_vg': 'ceph-35e03706-0bf5-5720-bc24-6001f60a2be0'})
2026-01-05 00:47:32.544424 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:32.544431 | orchestrator |
2026-01-05 00:47:32.544438 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-01-05 00:47:32.544445 | orchestrator | Monday 05 January 2026 00:47:30 +0000 (0:00:00.169) 0:00:52.700 ********
2026-01-05 00:47:32.544452 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:47:32.544456 | orchestrator |
2026-01-05 00:47:32.544461 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-01-05 00:47:32.544465 | orchestrator | Monday 05 January 2026 00:47:31 +0000 (0:00:00.515) 0:00:53.216 ********
2026-01-05 00:47:32.544469 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:47:32.544474 | orchestrator |
2026-01-05 00:47:32.544479 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-01-05 00:47:32.544483 | orchestrator | Monday 05 January 2026 00:47:31 +0000 (0:00:00.506) 0:00:53.723 ********
2026-01-05 00:47:32.544488 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:47:32.544492 | orchestrator |
2026-01-05 00:47:32.544496 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-01-05 00:47:32.544500 | orchestrator | Monday 05 January 2026 00:47:31 +0000 (0:00:00.176) 0:00:53.899 ********
2026-01-05 00:47:32.544505 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-35e03706-0bf5-5720-bc24-6001f60a2be0', 'vg_name': 'ceph-35e03706-0bf5-5720-bc24-6001f60a2be0'})
2026-01-05 00:47:32.544511 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-bd4e3544-7c7e-58ac-a4cc-590b648d75bf', 'vg_name': 'ceph-bd4e3544-7c7e-58ac-a4cc-590b648d75bf'})
2026-01-05 00:47:32.544516 | orchestrator |
2026-01-05 00:47:32.544520 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-01-05 00:47:32.544524 | orchestrator | Monday 05 January 2026 00:47:32 +0000 (0:00:00.195) 0:00:54.095 ********
2026-01-05 00:47:32.544530 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd4e3544-7c7e-58ac-a4cc-590b648d75bf', 'data_vg': 'ceph-bd4e3544-7c7e-58ac-a4cc-590b648d75bf'})
2026-01-05 00:47:32.544537 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35e03706-0bf5-5720-bc24-6001f60a2be0', 'data_vg': 'ceph-35e03706-0bf5-5720-bc24-6001f60a2be0'})
2026-01-05 00:47:32.544544 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:32.544548 | orchestrator |
2026-01-05 00:47:32.544552 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-01-05 00:47:32.544557 | orchestrator | Monday 05 January 2026 00:47:32 +0000 (0:00:00.183) 0:00:54.278 ********
2026-01-05 00:47:32.544561 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd4e3544-7c7e-58ac-a4cc-590b648d75bf', 'data_vg': 'ceph-bd4e3544-7c7e-58ac-a4cc-590b648d75bf'})
2026-01-05 00:47:32.544571 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35e03706-0bf5-5720-bc24-6001f60a2be0', 'data_vg': 'ceph-35e03706-0bf5-5720-bc24-6001f60a2be0'})
2026-01-05 00:47:39.429617 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:39.429742 | orchestrator |
2026-01-05 00:47:39.429756 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-01-05 00:47:39.429766 | orchestrator | Monday 05 January 2026 00:47:32 +0000 (0:00:00.192) 0:00:54.471 ********
2026-01-05 00:47:39.429772 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd4e3544-7c7e-58ac-a4cc-590b648d75bf', 'data_vg': 'ceph-bd4e3544-7c7e-58ac-a4cc-590b648d75bf'})
2026-01-05 00:47:39.429778 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35e03706-0bf5-5720-bc24-6001f60a2be0', 'data_vg': 'ceph-35e03706-0bf5-5720-bc24-6001f60a2be0'})
2026-01-05 00:47:39.429782 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:47:39.429807 | orchestrator |
2026-01-05 00:47:39.429811 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-01-05 00:47:39.429815 | orchestrator | Monday 05 January 2026 00:47:32 +0000 (0:00:00.153) 0:00:54.624 ********
2026-01-05 00:47:39.429820 | orchestrator | ok: [testbed-node-4] => {
2026-01-05 00:47:39.429824 | orchestrator |     "lvm_report": {
2026-01-05 00:47:39.429830 | orchestrator |         "lv": [
2026-01-05 00:47:39.429834 | orchestrator |             {
2026-01-05 00:47:39.429840 | orchestrator |                 "lv_name": "osd-block-35e03706-0bf5-5720-bc24-6001f60a2be0",
2026-01-05 00:47:39.429848 | orchestrator |                 "vg_name": "ceph-35e03706-0bf5-5720-bc24-6001f60a2be0"
2026-01-05 00:47:39.429854 | orchestrator |             },
2026-01-05 00:47:39.429860 | orchestrator |             {
2026-01-05 00:47:39.429866 | orchestrator |                 "lv_name": "osd-block-bd4e3544-7c7e-58ac-a4cc-590b648d75bf",
2026-01-05 00:47:39.429872 | orchestrator |                 "vg_name": "ceph-bd4e3544-7c7e-58ac-a4cc-590b648d75bf"
2026-01-05 00:47:39.429879 | orchestrator |             }
2026-01-05 00:47:39.429884 | orchestrator |         ],
2026-01-05 00:47:39.429895 | orchestrator |         "pv": [
2026-01-05 00:47:39.429906 | orchestrator |             {
2026-01-05 00:47:39.429918 | orchestrator |                 "pv_name": "/dev/sdb",
2026-01-05 00:47:39.429924 | orchestrator |                 "vg_name": "ceph-bd4e3544-7c7e-58ac-a4cc-590b648d75bf"
2026-01-05 00:47:39.429930 | orchestrator |             },
2026-01-05 00:47:39.429937 | orchestrator |             {
2026-01-05 00:47:39.429943 | orchestrator |                 "pv_name": "/dev/sdc",
2026-01-05 00:47:39.429954 | orchestrator |                 "vg_name": "ceph-35e03706-0bf5-5720-bc24-6001f60a2be0"
2026-01-05 00:47:39.429962 | orchestrator |             }
2026-01-05 00:47:39.429969 | orchestrator |         ]
2026-01-05 00:47:39.429976 | orchestrator |     }
2026-01-05 00:47:39.429983 | orchestrator | }
2026-01-05 00:47:39.429989 | orchestrator |
2026-01-05 00:47:39.429996 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-01-05 00:47:39.430002 | orchestrator |
2026-01-05 00:47:39.430008 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-05 00:47:39.430082 | orchestrator | Monday 05 January 2026 00:47:33 +0000 (0:00:00.585) 0:00:55.210 ********
2026-01-05 00:47:39.430097 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-05 00:47:39.430104 | orchestrator |
2026-01-05 00:47:39.430112 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-05 00:47:39.430118 | orchestrator | Monday 05 January 2026 00:47:33 +0000 (0:00:00.321) 0:00:55.531 ********
2026-01-05 00:47:39.430127 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:47:39.430134 | orchestrator |
2026-01-05 00:47:39.430141 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:47:39.430148 | orchestrator | Monday 05 January 2026 00:47:33 +0000 (0:00:00.251) 0:00:55.783 ********
2026-01-05 00:47:39.430154 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-01-05 00:47:39.430161 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-01-05 00:47:39.430168 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-01-05 00:47:39.430177 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-01-05 00:47:39.430184 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-01-05 00:47:39.430190 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-01-05 00:47:39.430197 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-01-05 00:47:39.430207 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-01-05 00:47:39.430218 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-01-05 00:47:39.430234 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-01-05 00:47:39.430241 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-01-05 00:47:39.430249 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-01-05 00:47:39.430256 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-01-05 00:47:39.430284 | orchestrator |
2026-01-05 00:47:39.430294 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:47:39.430300 | orchestrator | Monday 05 January 2026 00:47:34 +0000 (0:00:00.450) 0:00:56.234 ********
2026-01-05 00:47:39.430306 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:47:39.430311 | orchestrator |
2026-01-05 00:47:39.430317 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:47:39.430324 | orchestrator | Monday 05 January 2026 00:47:34 +0000 (0:00:00.206) 0:00:56.441 ********
2026-01-05 00:47:39.430329 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:47:39.430335 | orchestrator |
2026-01-05 00:47:39.430341 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:47:39.430373 | orchestrator | Monday 05 January 2026 00:47:34 +0000 (0:00:00.248) 0:00:56.689 ********
2026-01-05 00:47:39.430381 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:47:39.430388 | orchestrator |
2026-01-05 00:47:39.430394 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:47:39.430401 | orchestrator | Monday 05 January 2026 00:47:34 +0000 (0:00:00.204) 0:00:56.894 ********
2026-01-05 00:47:39.430408 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:47:39.430416 | orchestrator |
2026-01-05 00:47:39.430422 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:47:39.430487 | orchestrator | Monday 05 January 2026 00:47:35 +0000 (0:00:00.237) 0:00:57.132 ********
2026-01-05 00:47:39.430499 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:47:39.430507 | orchestrator |
2026-01-05 00:47:39.430515 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:47:39.430522 | orchestrator | Monday 05 January 2026 00:47:35 +0000 (0:00:00.780) 0:00:57.912 ********
2026-01-05 00:47:39.430529 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:47:39.430536 | orchestrator |
2026-01-05 00:47:39.430543 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:47:39.430550 | orchestrator | Monday 05 January 2026 00:47:36 +0000 (0:00:00.216) 0:00:58.129 ********
2026-01-05 00:47:39.430557 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:47:39.430564 | orchestrator |
2026-01-05 00:47:39.430571 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:47:39.430578 | orchestrator | Monday 05 January 2026 00:47:36 +0000 (0:00:00.234) 0:00:58.363 ********
2026-01-05 00:47:39.430584 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:47:39.430590 | orchestrator |
2026-01-05 00:47:39.430596 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:47:39.430602 | orchestrator | Monday 05 January 2026 00:47:36 +0000 (0:00:00.225) 0:00:58.589 ********
2026-01-05 00:47:39.430609 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305)
2026-01-05 00:47:39.430618 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305)
2026-01-05 00:47:39.430626 | orchestrator |
2026-01-05 00:47:39.430632 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:47:39.430637 | orchestrator | Monday 05 January 2026 00:47:37 +0000 (0:00:00.457) 0:00:59.047 ********
2026-01-05 00:47:39.430644 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_23055056-069f-450b-aeeb-5eb50c3216da)
2026-01-05 00:47:39.430650 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_23055056-069f-450b-aeeb-5eb50c3216da)
2026-01-05 00:47:39.430655 | orchestrator |
2026-01-05 00:47:39.430670 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:47:39.430684 | orchestrator | Monday 05 January 2026 00:47:37 +0000 (0:00:00.491) 0:00:59.539 ********
2026-01-05 00:47:39.430691 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_bd2b6514-9bcf-45c0-8865-be606d512acf)
2026-01-05 00:47:39.430698 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_bd2b6514-9bcf-45c0-8865-be606d512acf)
2026-01-05 00:47:39.430705 | orchestrator |
2026-01-05 00:47:39.430711 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:47:39.430717 | orchestrator | Monday 05 January 2026 00:47:38 +0000 (0:00:00.494) 0:01:00.033 ********
2026-01-05 00:47:39.430723 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a447ecf7-81d3-4a74-8944-683d4141cf1b)
2026-01-05 00:47:39.430730 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a447ecf7-81d3-4a74-8944-683d4141cf1b)
2026-01-05 00:47:39.430736 | orchestrator |
2026-01-05 00:47:39.430743 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 00:47:39.430750 | orchestrator | Monday 05 January 2026 00:47:38 +0000 (0:00:00.489) 0:01:00.523 ********
2026-01-05 00:47:39.430757 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-05 00:47:39.430763 | orchestrator |
2026-01-05 00:47:39.430769 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:47:39.430775 | orchestrator | Monday 05 January 2026 00:47:38 +0000 (0:00:00.390) 0:01:00.913 ********
2026-01-05 00:47:39.430783 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-01-05 00:47:39.430789 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-01-05 00:47:39.430795 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-01-05 00:47:39.430801 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-01-05 00:47:39.430810 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-01-05 00:47:39.430817 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-01-05 00:47:39.430824 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-01-05 00:47:39.430831 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-01-05 00:47:39.430837 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-01-05 00:47:39.430843 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-01-05 00:47:39.430849 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-01-05 00:47:39.430870 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-01-05 00:47:49.375885 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-01-05 00:47:49.375961 | orchestrator |
2026-01-05 00:47:49.375968 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:47:49.375972 | orchestrator | Monday 05 January 2026 00:47:39 +0000 (0:00:00.438) 0:01:01.351 ********
2026-01-05 00:47:49.375976 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:47:49.375981 | orchestrator |
2026-01-05 00:47:49.375985 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:47:49.375989 | orchestrator | Monday 05 January 2026 00:47:39 +0000 (0:00:00.214) 0:01:01.566 ********
2026-01-05 00:47:49.375993 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:47:49.375997 | orchestrator |
2026-01-05 00:47:49.376003 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:47:49.376010 | orchestrator | Monday 05 January 2026 00:47:40 +0000 (0:00:00.855) 0:01:02.421 ********
2026-01-05 00:47:49.376035 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:47:49.376039 | orchestrator |
2026-01-05 00:47:49.376043 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:47:49.376047 | orchestrator | Monday 05 January 2026 00:47:40 +0000 (0:00:00.243) 0:01:02.665 ********
2026-01-05 00:47:49.376051 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:47:49.376055 | orchestrator |
2026-01-05 00:47:49.376058 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:47:49.376062 | orchestrator | Monday 05 January 2026 00:47:40 +0000 (0:00:00.216) 0:01:02.882 ********
2026-01-05 00:47:49.376066 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:47:49.376070 | orchestrator |
2026-01-05 00:47:49.376074 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:47:49.376079 | orchestrator | Monday 05 January 2026 00:47:41 +0000 (0:00:00.243) 0:01:03.125 ********
2026-01-05 00:47:49.376085 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:47:49.376090 | orchestrator |
2026-01-05 00:47:49.376094 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:47:49.376098 | orchestrator | Monday 05 January 2026 00:47:41 +0000 (0:00:00.254) 0:01:03.380 ********
2026-01-05 00:47:49.376101 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:47:49.376105 | orchestrator |
2026-01-05 00:47:49.376109 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:47:49.376115 | orchestrator | Monday 05 January 2026 00:47:41 +0000 (0:00:00.230) 0:01:03.611 ********
2026-01-05 00:47:49.376120 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:47:49.376124 | orchestrator |
2026-01-05 00:47:49.376128 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 00:47:49.376132 | orchestrator | Monday 05 January 2026 00:47:41 +0000 (0:00:00.212) 0:01:03.823 ********
2026-01-05 00:47:49.376144 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-01-05 00:47:49.376148 | orchestrator |
ok: [testbed-node-5] => (item=sda14) 2026-01-05 00:47:49.376158 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-01-05 00:47:49.376162 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-01-05 00:47:49.376167 | orchestrator | 2026-01-05 00:47:49.376174 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:47:49.376178 | orchestrator | Monday 05 January 2026 00:47:42 +0000 (0:00:00.721) 0:01:04.544 ******** 2026-01-05 00:47:49.376182 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:49.376186 | orchestrator | 2026-01-05 00:47:49.376189 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:47:49.376193 | orchestrator | Monday 05 January 2026 00:47:42 +0000 (0:00:00.233) 0:01:04.778 ******** 2026-01-05 00:47:49.376206 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:49.376211 | orchestrator | 2026-01-05 00:47:49.376222 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:47:49.376232 | orchestrator | Monday 05 January 2026 00:47:43 +0000 (0:00:00.209) 0:01:04.988 ******** 2026-01-05 00:47:49.376236 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:49.376240 | orchestrator | 2026-01-05 00:47:49.376243 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 00:47:49.376281 | orchestrator | Monday 05 January 2026 00:47:43 +0000 (0:00:00.184) 0:01:05.172 ******** 2026-01-05 00:47:49.376292 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:49.376297 | orchestrator | 2026-01-05 00:47:49.376303 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-05 00:47:49.376308 | orchestrator | Monday 05 January 2026 00:47:43 +0000 (0:00:00.203) 0:01:05.376 ******** 2026-01-05 00:47:49.376314 | orchestrator | skipping: [testbed-node-5] 2026-01-05 
00:47:49.376320 | orchestrator | 2026-01-05 00:47:49.376326 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-05 00:47:49.376332 | orchestrator | Monday 05 January 2026 00:47:43 +0000 (0:00:00.275) 0:01:05.652 ******** 2026-01-05 00:47:49.376338 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f2726894-ebb3-5d48-8b2e-e077f444c4ac'}}) 2026-01-05 00:47:49.376350 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'edc09b40-6ec9-59c0-95b4-baacc31b5a92'}}) 2026-01-05 00:47:49.376356 | orchestrator | 2026-01-05 00:47:49.376370 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-05 00:47:49.376377 | orchestrator | Monday 05 January 2026 00:47:43 +0000 (0:00:00.183) 0:01:05.836 ******** 2026-01-05 00:47:49.376384 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f2726894-ebb3-5d48-8b2e-e077f444c4ac', 'data_vg': 'ceph-f2726894-ebb3-5d48-8b2e-e077f444c4ac'}) 2026-01-05 00:47:49.376392 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-edc09b40-6ec9-59c0-95b4-baacc31b5a92', 'data_vg': 'ceph-edc09b40-6ec9-59c0-95b4-baacc31b5a92'}) 2026-01-05 00:47:49.376397 | orchestrator | 2026-01-05 00:47:49.376418 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-05 00:47:49.376445 | orchestrator | Monday 05 January 2026 00:47:45 +0000 (0:00:01.938) 0:01:07.774 ******** 2026-01-05 00:47:49.376456 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f2726894-ebb3-5d48-8b2e-e077f444c4ac', 'data_vg': 'ceph-f2726894-ebb3-5d48-8b2e-e077f444c4ac'})  2026-01-05 00:47:49.376469 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edc09b40-6ec9-59c0-95b4-baacc31b5a92', 'data_vg': 'ceph-edc09b40-6ec9-59c0-95b4-baacc31b5a92'})  2026-01-05 00:47:49.376480 | orchestrator | skipping: 
[testbed-node-5] 2026-01-05 00:47:49.376494 | orchestrator | 2026-01-05 00:47:49.376503 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-05 00:47:49.376510 | orchestrator | Monday 05 January 2026 00:47:45 +0000 (0:00:00.163) 0:01:07.938 ******** 2026-01-05 00:47:49.376517 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f2726894-ebb3-5d48-8b2e-e077f444c4ac', 'data_vg': 'ceph-f2726894-ebb3-5d48-8b2e-e077f444c4ac'}) 2026-01-05 00:47:49.376531 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-edc09b40-6ec9-59c0-95b4-baacc31b5a92', 'data_vg': 'ceph-edc09b40-6ec9-59c0-95b4-baacc31b5a92'}) 2026-01-05 00:47:49.376538 | orchestrator | 2026-01-05 00:47:49.376544 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-05 00:47:49.376551 | orchestrator | Monday 05 January 2026 00:47:47 +0000 (0:00:01.580) 0:01:09.518 ******** 2026-01-05 00:47:49.376560 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f2726894-ebb3-5d48-8b2e-e077f444c4ac', 'data_vg': 'ceph-f2726894-ebb3-5d48-8b2e-e077f444c4ac'})  2026-01-05 00:47:49.376581 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edc09b40-6ec9-59c0-95b4-baacc31b5a92', 'data_vg': 'ceph-edc09b40-6ec9-59c0-95b4-baacc31b5a92'})  2026-01-05 00:47:49.376588 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:49.376611 | orchestrator | 2026-01-05 00:47:49.376625 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-05 00:47:49.376632 | orchestrator | Monday 05 January 2026 00:47:47 +0000 (0:00:00.192) 0:01:09.711 ******** 2026-01-05 00:47:49.376647 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:49.376654 | orchestrator | 2026-01-05 00:47:49.376670 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-05 00:47:49.376681 | 
orchestrator | Monday 05 January 2026 00:47:47 +0000 (0:00:00.146) 0:01:09.858 ******** 2026-01-05 00:47:49.376708 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f2726894-ebb3-5d48-8b2e-e077f444c4ac', 'data_vg': 'ceph-f2726894-ebb3-5d48-8b2e-e077f444c4ac'})  2026-01-05 00:47:49.376725 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edc09b40-6ec9-59c0-95b4-baacc31b5a92', 'data_vg': 'ceph-edc09b40-6ec9-59c0-95b4-baacc31b5a92'})  2026-01-05 00:47:49.376742 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:49.376761 | orchestrator | 2026-01-05 00:47:49.376770 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-05 00:47:49.376786 | orchestrator | Monday 05 January 2026 00:47:48 +0000 (0:00:00.199) 0:01:10.057 ******** 2026-01-05 00:47:49.376794 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:49.376805 | orchestrator | 2026-01-05 00:47:49.376817 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-05 00:47:49.376827 | orchestrator | Monday 05 January 2026 00:47:48 +0000 (0:00:00.170) 0:01:10.227 ******** 2026-01-05 00:47:49.376836 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f2726894-ebb3-5d48-8b2e-e077f444c4ac', 'data_vg': 'ceph-f2726894-ebb3-5d48-8b2e-e077f444c4ac'})  2026-01-05 00:47:49.376846 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edc09b40-6ec9-59c0-95b4-baacc31b5a92', 'data_vg': 'ceph-edc09b40-6ec9-59c0-95b4-baacc31b5a92'})  2026-01-05 00:47:49.376858 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:49.376865 | orchestrator | 2026-01-05 00:47:49.376873 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-05 00:47:49.376879 | orchestrator | Monday 05 January 2026 00:47:48 +0000 (0:00:00.178) 0:01:10.406 ******** 2026-01-05 00:47:49.376886 | orchestrator | 
skipping: [testbed-node-5] 2026-01-05 00:47:49.376892 | orchestrator | 2026-01-05 00:47:49.376899 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-05 00:47:49.376906 | orchestrator | Monday 05 January 2026 00:47:48 +0000 (0:00:00.153) 0:01:10.559 ******** 2026-01-05 00:47:49.376912 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f2726894-ebb3-5d48-8b2e-e077f444c4ac', 'data_vg': 'ceph-f2726894-ebb3-5d48-8b2e-e077f444c4ac'})  2026-01-05 00:47:49.376918 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edc09b40-6ec9-59c0-95b4-baacc31b5a92', 'data_vg': 'ceph-edc09b40-6ec9-59c0-95b4-baacc31b5a92'})  2026-01-05 00:47:49.376924 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:49.376938 | orchestrator | 2026-01-05 00:47:49.376944 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-05 00:47:49.376951 | orchestrator | Monday 05 January 2026 00:47:48 +0000 (0:00:00.167) 0:01:10.727 ******** 2026-01-05 00:47:49.376957 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:47:49.376963 | orchestrator | 2026-01-05 00:47:49.376969 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-01-05 00:47:49.376975 | orchestrator | Monday 05 January 2026 00:47:49 +0000 (0:00:00.408) 0:01:11.135 ******** 2026-01-05 00:47:49.377009 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f2726894-ebb3-5d48-8b2e-e077f444c4ac', 'data_vg': 'ceph-f2726894-ebb3-5d48-8b2e-e077f444c4ac'})  2026-01-05 00:47:56.405452 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edc09b40-6ec9-59c0-95b4-baacc31b5a92', 'data_vg': 'ceph-edc09b40-6ec9-59c0-95b4-baacc31b5a92'})  2026-01-05 00:47:56.405572 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:56.405585 | orchestrator | 2026-01-05 00:47:56.405594 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-01-05 00:47:56.405602 | orchestrator | Monday 05 January 2026 00:47:49 +0000 (0:00:00.173) 0:01:11.308 ******** 2026-01-05 00:47:56.405609 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f2726894-ebb3-5d48-8b2e-e077f444c4ac', 'data_vg': 'ceph-f2726894-ebb3-5d48-8b2e-e077f444c4ac'})  2026-01-05 00:47:56.405617 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edc09b40-6ec9-59c0-95b4-baacc31b5a92', 'data_vg': 'ceph-edc09b40-6ec9-59c0-95b4-baacc31b5a92'})  2026-01-05 00:47:56.405623 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:56.405629 | orchestrator | 2026-01-05 00:47:56.405636 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-01-05 00:47:56.405642 | orchestrator | Monday 05 January 2026 00:47:49 +0000 (0:00:00.180) 0:01:11.488 ******** 2026-01-05 00:47:56.405646 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f2726894-ebb3-5d48-8b2e-e077f444c4ac', 'data_vg': 'ceph-f2726894-ebb3-5d48-8b2e-e077f444c4ac'})  2026-01-05 00:47:56.405650 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edc09b40-6ec9-59c0-95b4-baacc31b5a92', 'data_vg': 'ceph-edc09b40-6ec9-59c0-95b4-baacc31b5a92'})  2026-01-05 00:47:56.405677 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:56.405681 | orchestrator | 2026-01-05 00:47:56.405685 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-05 00:47:56.405689 | orchestrator | Monday 05 January 2026 00:47:49 +0000 (0:00:00.182) 0:01:11.671 ******** 2026-01-05 00:47:56.405693 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:56.405696 | orchestrator | 2026-01-05 00:47:56.405700 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-05 00:47:56.405704 | orchestrator | Monday 05 January 2026 00:47:49 +0000 
(0:00:00.195) 0:01:11.867 ******** 2026-01-05 00:47:56.405708 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:56.405712 | orchestrator | 2026-01-05 00:47:56.405715 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-01-05 00:47:56.405719 | orchestrator | Monday 05 January 2026 00:47:50 +0000 (0:00:00.183) 0:01:12.050 ******** 2026-01-05 00:47:56.405723 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:56.405726 | orchestrator | 2026-01-05 00:47:56.405731 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-05 00:47:56.405734 | orchestrator | Monday 05 January 2026 00:47:50 +0000 (0:00:00.187) 0:01:12.238 ******** 2026-01-05 00:47:56.405738 | orchestrator | ok: [testbed-node-5] => { 2026-01-05 00:47:56.405743 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-05 00:47:56.405784 | orchestrator | } 2026-01-05 00:47:56.405788 | orchestrator | 2026-01-05 00:47:56.405792 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-05 00:47:56.405796 | orchestrator | Monday 05 January 2026 00:47:50 +0000 (0:00:00.180) 0:01:12.418 ******** 2026-01-05 00:47:56.405800 | orchestrator | ok: [testbed-node-5] => { 2026-01-05 00:47:56.405804 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-05 00:47:56.405809 | orchestrator | } 2026-01-05 00:47:56.405813 | orchestrator | 2026-01-05 00:47:56.405817 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-05 00:47:56.405821 | orchestrator | Monday 05 January 2026 00:47:50 +0000 (0:00:00.196) 0:01:12.615 ******** 2026-01-05 00:47:56.405825 | orchestrator | ok: [testbed-node-5] => { 2026-01-05 00:47:56.405829 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-05 00:47:56.405833 | orchestrator | } 2026-01-05 00:47:56.405838 | orchestrator | 2026-01-05 00:47:56.405845 | orchestrator | 
TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-01-05 00:47:56.405851 | orchestrator | Monday 05 January 2026 00:47:50 +0000 (0:00:00.191) 0:01:12.806 ******** 2026-01-05 00:47:56.405858 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:47:56.405862 | orchestrator | 2026-01-05 00:47:56.405866 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-05 00:47:56.405869 | orchestrator | Monday 05 January 2026 00:47:51 +0000 (0:00:00.532) 0:01:13.339 ******** 2026-01-05 00:47:56.405873 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:47:56.405877 | orchestrator | 2026-01-05 00:47:56.405881 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-05 00:47:56.405885 | orchestrator | Monday 05 January 2026 00:47:51 +0000 (0:00:00.496) 0:01:13.835 ******** 2026-01-05 00:47:56.405888 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:47:56.405892 | orchestrator | 2026-01-05 00:47:56.405920 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-01-05 00:47:56.405924 | orchestrator | Monday 05 January 2026 00:47:52 +0000 (0:00:00.768) 0:01:14.603 ******** 2026-01-05 00:47:56.405928 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:47:56.405932 | orchestrator | 2026-01-05 00:47:56.405936 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-05 00:47:56.405940 | orchestrator | Monday 05 January 2026 00:47:52 +0000 (0:00:00.182) 0:01:14.786 ******** 2026-01-05 00:47:56.405943 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:56.405947 | orchestrator | 2026-01-05 00:47:56.405951 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-05 00:47:56.405960 | orchestrator | Monday 05 January 2026 00:47:52 +0000 (0:00:00.148) 0:01:14.935 ******** 2026-01-05 00:47:56.405965 | 
orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:56.405972 | orchestrator | 2026-01-05 00:47:56.405979 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-05 00:47:56.405999 | orchestrator | Monday 05 January 2026 00:47:53 +0000 (0:00:00.139) 0:01:15.075 ******** 2026-01-05 00:47:56.406004 | orchestrator | ok: [testbed-node-5] => { 2026-01-05 00:47:56.406008 | orchestrator |  "vgs_report": { 2026-01-05 00:47:56.406051 | orchestrator |  "vg": [] 2026-01-05 00:47:56.406073 | orchestrator |  } 2026-01-05 00:47:56.406078 | orchestrator | } 2026-01-05 00:47:56.406081 | orchestrator | 2026-01-05 00:47:56.406085 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-05 00:47:56.406089 | orchestrator | Monday 05 January 2026 00:47:53 +0000 (0:00:00.187) 0:01:15.262 ******** 2026-01-05 00:47:56.406093 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:56.406097 | orchestrator | 2026-01-05 00:47:56.406100 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-01-05 00:47:56.406104 | orchestrator | Monday 05 January 2026 00:47:53 +0000 (0:00:00.194) 0:01:15.457 ******** 2026-01-05 00:47:56.406108 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:56.406112 | orchestrator | 2026-01-05 00:47:56.406115 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-05 00:47:56.406119 | orchestrator | Monday 05 January 2026 00:47:53 +0000 (0:00:00.140) 0:01:15.597 ******** 2026-01-05 00:47:56.406123 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:56.406127 | orchestrator | 2026-01-05 00:47:56.406130 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-05 00:47:56.406134 | orchestrator | Monday 05 January 2026 00:47:53 +0000 (0:00:00.141) 0:01:15.739 ******** 2026-01-05 00:47:56.406138 | 
orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:56.406142 | orchestrator | 2026-01-05 00:47:56.406145 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-05 00:47:56.406149 | orchestrator | Monday 05 January 2026 00:47:53 +0000 (0:00:00.173) 0:01:15.912 ******** 2026-01-05 00:47:56.406153 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:56.406157 | orchestrator | 2026-01-05 00:47:56.406160 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-05 00:47:56.406164 | orchestrator | Monday 05 January 2026 00:47:54 +0000 (0:00:00.158) 0:01:16.070 ******** 2026-01-05 00:47:56.406168 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:56.406172 | orchestrator | 2026-01-05 00:47:56.406175 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-05 00:47:56.406179 | orchestrator | Monday 05 January 2026 00:47:54 +0000 (0:00:00.155) 0:01:16.226 ******** 2026-01-05 00:47:56.406183 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:56.406187 | orchestrator | 2026-01-05 00:47:56.406190 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-05 00:47:56.406194 | orchestrator | Monday 05 January 2026 00:47:54 +0000 (0:00:00.134) 0:01:16.360 ******** 2026-01-05 00:47:56.406198 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:56.406202 | orchestrator | 2026-01-05 00:47:56.406205 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-05 00:47:56.406209 | orchestrator | Monday 05 January 2026 00:47:54 +0000 (0:00:00.420) 0:01:16.780 ******** 2026-01-05 00:47:56.406213 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:56.406216 | orchestrator | 2026-01-05 00:47:56.406288 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2026-01-05 00:47:56.406296 | orchestrator | Monday 05 January 2026 00:47:54 +0000 (0:00:00.157) 0:01:16.938 ******** 2026-01-05 00:47:56.406304 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:56.406311 | orchestrator | 2026-01-05 00:47:56.406317 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-01-05 00:47:56.406330 | orchestrator | Monday 05 January 2026 00:47:55 +0000 (0:00:00.142) 0:01:17.080 ******** 2026-01-05 00:47:56.406336 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:56.406342 | orchestrator | 2026-01-05 00:47:56.406349 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-01-05 00:47:56.406355 | orchestrator | Monday 05 January 2026 00:47:55 +0000 (0:00:00.133) 0:01:17.214 ******** 2026-01-05 00:47:56.406362 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:56.406369 | orchestrator | 2026-01-05 00:47:56.406375 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-01-05 00:47:56.406382 | orchestrator | Monday 05 January 2026 00:47:55 +0000 (0:00:00.164) 0:01:17.378 ******** 2026-01-05 00:47:56.406388 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:56.406394 | orchestrator | 2026-01-05 00:47:56.406400 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-01-05 00:47:56.406407 | orchestrator | Monday 05 January 2026 00:47:55 +0000 (0:00:00.178) 0:01:17.557 ******** 2026-01-05 00:47:56.406413 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:56.406420 | orchestrator | 2026-01-05 00:47:56.406426 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-01-05 00:47:56.406433 | orchestrator | Monday 05 January 2026 00:47:55 +0000 (0:00:00.214) 0:01:17.772 ******** 2026-01-05 00:47:56.406440 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-f2726894-ebb3-5d48-8b2e-e077f444c4ac', 'data_vg': 'ceph-f2726894-ebb3-5d48-8b2e-e077f444c4ac'})  2026-01-05 00:47:56.406447 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edc09b40-6ec9-59c0-95b4-baacc31b5a92', 'data_vg': 'ceph-edc09b40-6ec9-59c0-95b4-baacc31b5a92'})  2026-01-05 00:47:56.406452 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:56.406458 | orchestrator | 2026-01-05 00:47:56.406464 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-05 00:47:56.406470 | orchestrator | Monday 05 January 2026 00:47:56 +0000 (0:00:00.189) 0:01:17.961 ******** 2026-01-05 00:47:56.406475 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f2726894-ebb3-5d48-8b2e-e077f444c4ac', 'data_vg': 'ceph-f2726894-ebb3-5d48-8b2e-e077f444c4ac'})  2026-01-05 00:47:56.406482 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edc09b40-6ec9-59c0-95b4-baacc31b5a92', 'data_vg': 'ceph-edc09b40-6ec9-59c0-95b4-baacc31b5a92'})  2026-01-05 00:47:56.406488 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:56.406494 | orchestrator | 2026-01-05 00:47:56.406500 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-05 00:47:56.406504 | orchestrator | Monday 05 January 2026 00:47:56 +0000 (0:00:00.182) 0:01:18.143 ******** 2026-01-05 00:47:56.406513 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f2726894-ebb3-5d48-8b2e-e077f444c4ac', 'data_vg': 'ceph-f2726894-ebb3-5d48-8b2e-e077f444c4ac'})  2026-01-05 00:47:59.904564 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edc09b40-6ec9-59c0-95b4-baacc31b5a92', 'data_vg': 'ceph-edc09b40-6ec9-59c0-95b4-baacc31b5a92'})  2026-01-05 00:47:59.904681 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:59.904694 | orchestrator | 2026-01-05 00:47:59.904704 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-01-05 00:47:59.904715 | orchestrator | Monday 05 January 2026 00:47:56 +0000 (0:00:00.192) 0:01:18.336 ******** 2026-01-05 00:47:59.904723 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f2726894-ebb3-5d48-8b2e-e077f444c4ac', 'data_vg': 'ceph-f2726894-ebb3-5d48-8b2e-e077f444c4ac'})  2026-01-05 00:47:59.904731 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edc09b40-6ec9-59c0-95b4-baacc31b5a92', 'data_vg': 'ceph-edc09b40-6ec9-59c0-95b4-baacc31b5a92'})  2026-01-05 00:47:59.904739 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:59.904747 | orchestrator | 2026-01-05 00:47:59.904754 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-05 00:47:59.904787 | orchestrator | Monday 05 January 2026 00:47:56 +0000 (0:00:00.184) 0:01:18.520 ******** 2026-01-05 00:47:59.904795 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f2726894-ebb3-5d48-8b2e-e077f444c4ac', 'data_vg': 'ceph-f2726894-ebb3-5d48-8b2e-e077f444c4ac'})  2026-01-05 00:47:59.904803 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edc09b40-6ec9-59c0-95b4-baacc31b5a92', 'data_vg': 'ceph-edc09b40-6ec9-59c0-95b4-baacc31b5a92'})  2026-01-05 00:47:59.904811 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:59.904818 | orchestrator | 2026-01-05 00:47:59.904825 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-05 00:47:59.904833 | orchestrator | Monday 05 January 2026 00:47:56 +0000 (0:00:00.181) 0:01:18.702 ******** 2026-01-05 00:47:59.904840 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f2726894-ebb3-5d48-8b2e-e077f444c4ac', 'data_vg': 'ceph-f2726894-ebb3-5d48-8b2e-e077f444c4ac'})  2026-01-05 00:47:59.904862 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-edc09b40-6ec9-59c0-95b4-baacc31b5a92', 'data_vg': 'ceph-edc09b40-6ec9-59c0-95b4-baacc31b5a92'})  2026-01-05 00:47:59.904870 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:59.904877 | orchestrator | 2026-01-05 00:47:59.904883 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-01-05 00:47:59.904890 | orchestrator | Monday 05 January 2026 00:47:57 +0000 (0:00:00.458) 0:01:19.161 ******** 2026-01-05 00:47:59.904897 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f2726894-ebb3-5d48-8b2e-e077f444c4ac', 'data_vg': 'ceph-f2726894-ebb3-5d48-8b2e-e077f444c4ac'})  2026-01-05 00:47:59.904903 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edc09b40-6ec9-59c0-95b4-baacc31b5a92', 'data_vg': 'ceph-edc09b40-6ec9-59c0-95b4-baacc31b5a92'})  2026-01-05 00:47:59.904910 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:59.904918 | orchestrator | 2026-01-05 00:47:59.904925 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-05 00:47:59.904933 | orchestrator | Monday 05 January 2026 00:47:57 +0000 (0:00:00.172) 0:01:19.333 ******** 2026-01-05 00:47:59.904939 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f2726894-ebb3-5d48-8b2e-e077f444c4ac', 'data_vg': 'ceph-f2726894-ebb3-5d48-8b2e-e077f444c4ac'})  2026-01-05 00:47:59.904947 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edc09b40-6ec9-59c0-95b4-baacc31b5a92', 'data_vg': 'ceph-edc09b40-6ec9-59c0-95b4-baacc31b5a92'})  2026-01-05 00:47:59.904954 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:59.904960 | orchestrator | 2026-01-05 00:47:59.904967 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-05 00:47:59.904974 | orchestrator | Monday 05 January 2026 00:47:57 +0000 (0:00:00.178) 0:01:19.511 ******** 2026-01-05 00:47:59.904981 | 
orchestrator | ok: [testbed-node-5] 2026-01-05 00:47:59.904990 | orchestrator | 2026-01-05 00:47:59.904997 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-05 00:47:59.905004 | orchestrator | Monday 05 January 2026 00:47:58 +0000 (0:00:00.582) 0:01:20.094 ******** 2026-01-05 00:47:59.905011 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:47:59.905018 | orchestrator | 2026-01-05 00:47:59.905025 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-05 00:47:59.905032 | orchestrator | Monday 05 January 2026 00:47:58 +0000 (0:00:00.611) 0:01:20.706 ******** 2026-01-05 00:47:59.905039 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:47:59.905047 | orchestrator | 2026-01-05 00:47:59.905053 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-05 00:47:59.905060 | orchestrator | Monday 05 January 2026 00:47:58 +0000 (0:00:00.173) 0:01:20.879 ******** 2026-01-05 00:47:59.905068 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-edc09b40-6ec9-59c0-95b4-baacc31b5a92', 'vg_name': 'ceph-edc09b40-6ec9-59c0-95b4-baacc31b5a92'}) 2026-01-05 00:47:59.905078 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-f2726894-ebb3-5d48-8b2e-e077f444c4ac', 'vg_name': 'ceph-f2726894-ebb3-5d48-8b2e-e077f444c4ac'}) 2026-01-05 00:47:59.905095 | orchestrator | 2026-01-05 00:47:59.905101 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-05 00:47:59.905108 | orchestrator | Monday 05 January 2026 00:47:59 +0000 (0:00:00.239) 0:01:21.119 ******** 2026-01-05 00:47:59.905134 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f2726894-ebb3-5d48-8b2e-e077f444c4ac', 'data_vg': 'ceph-f2726894-ebb3-5d48-8b2e-e077f444c4ac'})  2026-01-05 00:47:59.905142 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-edc09b40-6ec9-59c0-95b4-baacc31b5a92', 'data_vg': 'ceph-edc09b40-6ec9-59c0-95b4-baacc31b5a92'})  2026-01-05 00:47:59.905150 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:59.905156 | orchestrator | 2026-01-05 00:47:59.905163 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-01-05 00:47:59.905171 | orchestrator | Monday 05 January 2026 00:47:59 +0000 (0:00:00.174) 0:01:21.294 ******** 2026-01-05 00:47:59.905178 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f2726894-ebb3-5d48-8b2e-e077f444c4ac', 'data_vg': 'ceph-f2726894-ebb3-5d48-8b2e-e077f444c4ac'})  2026-01-05 00:47:59.905185 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edc09b40-6ec9-59c0-95b4-baacc31b5a92', 'data_vg': 'ceph-edc09b40-6ec9-59c0-95b4-baacc31b5a92'})  2026-01-05 00:47:59.905192 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:59.905198 | orchestrator | 2026-01-05 00:47:59.905206 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-05 00:47:59.905212 | orchestrator | Monday 05 January 2026 00:47:59 +0000 (0:00:00.173) 0:01:21.467 ******** 2026-01-05 00:47:59.905220 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f2726894-ebb3-5d48-8b2e-e077f444c4ac', 'data_vg': 'ceph-f2726894-ebb3-5d48-8b2e-e077f444c4ac'})  2026-01-05 00:47:59.905229 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edc09b40-6ec9-59c0-95b4-baacc31b5a92', 'data_vg': 'ceph-edc09b40-6ec9-59c0-95b4-baacc31b5a92'})  2026-01-05 00:47:59.905238 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:47:59.905298 | orchestrator | 2026-01-05 00:47:59.905305 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-05 00:47:59.905314 | orchestrator | Monday 05 January 2026 00:47:59 +0000 (0:00:00.166) 0:01:21.633 ******** 2026-01-05 00:47:59.905322 | 
orchestrator | ok: [testbed-node-5] => {
2026-01-05 00:47:59.905338 | orchestrator |     "lvm_report": {
2026-01-05 00:47:59.905345 | orchestrator |         "lv": [
2026-01-05 00:47:59.905353 | orchestrator |             {
2026-01-05 00:47:59.905369 | orchestrator |                 "lv_name": "osd-block-edc09b40-6ec9-59c0-95b4-baacc31b5a92",
2026-01-05 00:47:59.905379 | orchestrator |                 "vg_name": "ceph-edc09b40-6ec9-59c0-95b4-baacc31b5a92"
2026-01-05 00:47:59.905388 | orchestrator |             },
2026-01-05 00:47:59.905396 | orchestrator |             {
2026-01-05 00:47:59.905405 | orchestrator |                 "lv_name": "osd-block-f2726894-ebb3-5d48-8b2e-e077f444c4ac",
2026-01-05 00:47:59.905412 | orchestrator |                 "vg_name": "ceph-f2726894-ebb3-5d48-8b2e-e077f444c4ac"
2026-01-05 00:47:59.905419 | orchestrator |             }
2026-01-05 00:47:59.905427 | orchestrator |         ],
2026-01-05 00:47:59.905434 | orchestrator |         "pv": [
2026-01-05 00:47:59.905442 | orchestrator |             {
2026-01-05 00:47:59.905451 | orchestrator |                 "pv_name": "/dev/sdb",
2026-01-05 00:47:59.905459 | orchestrator |                 "vg_name": "ceph-f2726894-ebb3-5d48-8b2e-e077f444c4ac"
2026-01-05 00:47:59.905468 | orchestrator |             },
2026-01-05 00:47:59.905475 | orchestrator |             {
2026-01-05 00:47:59.905482 | orchestrator |                 "pv_name": "/dev/sdc",
2026-01-05 00:47:59.905489 | orchestrator |                 "vg_name": "ceph-edc09b40-6ec9-59c0-95b4-baacc31b5a92"
2026-01-05 00:47:59.905495 | orchestrator |             }
2026-01-05 00:47:59.905502 | orchestrator |         ]
2026-01-05 00:47:59.905515 | orchestrator |     }
2026-01-05 00:47:59.905523 | orchestrator | }
2026-01-05 00:47:59.905530 | orchestrator |
2026-01-05 00:47:59.905537 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:47:59.905544 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-01-05 00:47:59.905551 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-01-05 00:47:59.905559 |
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-05 00:47:59.905566 | orchestrator | 2026-01-05 00:47:59.905573 | orchestrator | 2026-01-05 00:47:59.905581 | orchestrator | 2026-01-05 00:47:59.905589 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:47:59.905596 | orchestrator | Monday 05 January 2026 00:47:59 +0000 (0:00:00.174) 0:01:21.807 ******** 2026-01-05 00:47:59.905603 | orchestrator | =============================================================================== 2026-01-05 00:47:59.905609 | orchestrator | Create block VGs -------------------------------------------------------- 5.88s 2026-01-05 00:47:59.905617 | orchestrator | Create block LVs -------------------------------------------------------- 4.53s 2026-01-05 00:47:59.905625 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.91s 2026-01-05 00:47:59.905633 | orchestrator | Add known partitions to the list of available block devices ------------- 1.90s 2026-01-05 00:47:59.905642 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.84s 2026-01-05 00:47:59.905649 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.79s 2026-01-05 00:47:59.905656 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.72s 2026-01-05 00:47:59.905664 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.69s 2026-01-05 00:47:59.905681 | orchestrator | Add known links to the list of available block devices ------------------ 1.59s 2026-01-05 00:48:00.454373 | orchestrator | Print LVM report data --------------------------------------------------- 1.14s 2026-01-05 00:48:00.454479 | orchestrator | Add known partitions to the list of available block devices ------------- 0.99s 2026-01-05 00:48:00.454489 | 
orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.95s 2026-01-05 00:48:00.454496 | orchestrator | Print number of OSDs wanted per DB+WAL VG ------------------------------- 0.93s 2026-01-05 00:48:00.454503 | orchestrator | Add known partitions to the list of available block devices ------------- 0.92s 2026-01-05 00:48:00.454510 | orchestrator | Add known partitions to the list of available block devices ------------- 0.91s 2026-01-05 00:48:00.454516 | orchestrator | Add known partitions to the list of available block devices ------------- 0.86s 2026-01-05 00:48:00.454522 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.83s 2026-01-05 00:48:00.454528 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.82s 2026-01-05 00:48:00.454534 | orchestrator | Print 'Create WAL LVs for ceph_db_wal_devices' -------------------------- 0.82s 2026-01-05 00:48:00.454539 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.79s 2026-01-05 00:48:13.129041 | orchestrator | 2026-01-05 00:48:13 | INFO  | Task 0f888dcb-e069-4c5c-87e2-291b183c52fd (facts) was prepared for execution. 2026-01-05 00:48:13.129178 | orchestrator | 2026-01-05 00:48:13 | INFO  | It takes a moment until task 0f888dcb-e069-4c5c-87e2-291b183c52fd (facts) has been started and output is visible here. 
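The "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" and "Create list of VG/LV names" tasks in the play above merge `lvs`/`pvs` JSON reports into the `lvm_report` structure printed at the end of the run. A minimal sketch of that combination step, assuming the `{"report": [{"lv": [...]}]}` shape that `lvs --reportformat json` and `pvs --reportformat json` produce (the sample names below are hypothetical stand-ins, and the playbook's actual Jinja filters are not shown in the log):

```python
import json

# Hypothetical outputs in the shape of `lvs --reportformat json`
# and `pvs --reportformat json`, mirroring the log's lvm_report.
lvs_out = json.dumps({"report": [{"lv": [
    {"lv_name": "osd-block-edc09b40", "vg_name": "ceph-edc09b40"},
    {"lv_name": "osd-block-f2726894", "vg_name": "ceph-f2726894"},
]}]})
pvs_out = json.dumps({"report": [{"pv": [
    {"pv_name": "/dev/sdb", "vg_name": "ceph-f2726894"},
    {"pv_name": "/dev/sdc", "vg_name": "ceph-edc09b40"},
]}]})

def combine_reports(lvs_json: str, pvs_json: str) -> dict:
    """Merge the lv and pv sections of both reports into one dict,
    as the 'Combine JSON' task does before printing lvm_report."""
    lv = json.loads(lvs_json)["report"][0]["lv"]
    pv = json.loads(pvs_json)["report"][0]["pv"]
    return {"lv": lv, "pv": pv}

lvm_report = combine_reports(lvs_out, pvs_out)

# 'Create list of VG/LV names': one "vg/lv" string per logical volume,
# the form used to match entries defined in lvm_volumes.
vg_lv_names = [f"{i['vg_name']}/{i['lv_name']}" for i in lvm_report["lv"]]
print(vg_lv_names)
```

The subsequent "Fail if ... LV defined in lvm_volumes is missing" tasks then check each configured volume against such a name list; in this run every block LV was found, so they were skipped.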
2026-01-05 00:48:27.490752 | orchestrator | 2026-01-05 00:48:27.490845 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-01-05 00:48:27.490854 | orchestrator | 2026-01-05 00:48:27.490861 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-05 00:48:27.490867 | orchestrator | Monday 05 January 2026 00:48:18 +0000 (0:00:00.291) 0:00:00.291 ******** 2026-01-05 00:48:27.490901 | orchestrator | ok: [testbed-manager] 2026-01-05 00:48:27.490908 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:48:27.490914 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:48:27.490920 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:48:27.490925 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:48:27.490931 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:48:27.490936 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:48:27.490941 | orchestrator | 2026-01-05 00:48:27.490947 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-05 00:48:27.490954 | orchestrator | Monday 05 January 2026 00:48:19 +0000 (0:00:01.246) 0:00:01.538 ******** 2026-01-05 00:48:27.490960 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:48:27.490967 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:48:27.490973 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:48:27.490978 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:48:27.490984 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:48:27.490989 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:48:27.490995 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:48:27.491000 | orchestrator | 2026-01-05 00:48:27.491006 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-05 00:48:27.491011 | orchestrator | 2026-01-05 00:48:27.491017 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-01-05 00:48:27.491022 | orchestrator | Monday 05 January 2026 00:48:21 +0000 (0:00:01.465) 0:00:03.003 ******** 2026-01-05 00:48:27.491028 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:48:27.491033 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:48:27.491038 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:48:27.491044 | orchestrator | ok: [testbed-manager] 2026-01-05 00:48:27.491049 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:48:27.491054 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:48:27.491060 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:48:27.491065 | orchestrator | 2026-01-05 00:48:27.491071 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-05 00:48:27.491076 | orchestrator | 2026-01-05 00:48:27.491082 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-05 00:48:27.491087 | orchestrator | Monday 05 January 2026 00:48:26 +0000 (0:00:05.076) 0:00:08.080 ******** 2026-01-05 00:48:27.491092 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:48:27.491108 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:48:27.491114 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:48:27.491119 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:48:27.491124 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:48:27.491129 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:48:27.491134 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:48:27.491139 | orchestrator | 2026-01-05 00:48:27.491144 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:48:27.491149 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:48:27.491156 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-01-05 00:48:27.491162 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:48:27.491167 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:48:27.491172 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:48:27.491177 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:48:27.491188 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:48:27.491194 | orchestrator | 2026-01-05 00:48:27.491199 | orchestrator | 2026-01-05 00:48:27.491204 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:48:27.491209 | orchestrator | Monday 05 January 2026 00:48:26 +0000 (0:00:00.599) 0:00:08.680 ******** 2026-01-05 00:48:27.491260 | orchestrator | =============================================================================== 2026-01-05 00:48:27.491267 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.08s 2026-01-05 00:48:27.491272 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.47s 2026-01-05 00:48:27.491277 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.25s 2026-01-05 00:48:27.491282 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.60s 2026-01-05 00:48:40.170477 | orchestrator | 2026-01-05 00:48:40 | INFO  | Task 41749b23-bde1-4a60-a465-7be32268beed (frr) was prepared for execution. 2026-01-05 00:48:40.170591 | orchestrator | 2026-01-05 00:48:40 | INFO  | It takes a moment until task 41749b23-bde1-4a60-a465-7be32268beed (frr) has been started and output is visible here. 
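The `osism.commons.facts` role above creates the custom facts directory and (conditionally) copies fact files into it. Ansible's setup module reads `/etc/ansible/facts.d`: any `*.fact` file containing JSON (or an executable printing JSON) is exposed under `ansible_local.<name>`. A minimal sketch of that lookup, using a temp directory and a hypothetical `testbed.fact` file in place of the real facts directory:

```python
import json
import tempfile
from pathlib import Path

# Stand-in for /etc/ansible/facts.d in this sketch.
facts_d = Path(tempfile.mkdtemp())

# A hypothetical static fact file, e.g. one the
# "Copy fact files" task could have placed here.
(facts_d / "testbed.fact").write_text(json.dumps({"role": "compute"}))

def load_local_facts(directory: Path) -> dict:
    """Collect static *.fact files the way they surface as ansible_local."""
    facts = {}
    for path in sorted(directory.glob("*.fact")):
        facts[path.stem] = json.loads(path.read_text())
    return facts

print(load_local_facts(facts_d))  # {'testbed': {'role': 'compute'}}
```

In this run the copy task was skipped on every host (no fact files configured), so the subsequent fact-gathering plays only collected the standard setup facts.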
2026-01-05 00:49:05.912634 | orchestrator | 2026-01-05 00:49:05.912774 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-01-05 00:49:05.912793 | orchestrator | 2026-01-05 00:49:05.912806 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-01-05 00:49:05.912839 | orchestrator | Monday 05 January 2026 00:48:44 +0000 (0:00:00.219) 0:00:00.219 ******** 2026-01-05 00:49:05.912852 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-01-05 00:49:05.912865 | orchestrator | 2026-01-05 00:49:05.912876 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-01-05 00:49:05.912888 | orchestrator | Monday 05 January 2026 00:48:44 +0000 (0:00:00.232) 0:00:00.451 ******** 2026-01-05 00:49:05.912899 | orchestrator | changed: [testbed-manager] 2026-01-05 00:49:05.912912 | orchestrator | 2026-01-05 00:49:05.912923 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-01-05 00:49:05.912943 | orchestrator | Monday 05 January 2026 00:48:46 +0000 (0:00:01.163) 0:00:01.615 ******** 2026-01-05 00:49:05.912954 | orchestrator | changed: [testbed-manager] 2026-01-05 00:49:05.912965 | orchestrator | 2026-01-05 00:49:05.912976 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-01-05 00:49:05.912987 | orchestrator | Monday 05 January 2026 00:48:55 +0000 (0:00:09.750) 0:00:11.365 ******** 2026-01-05 00:49:05.912998 | orchestrator | ok: [testbed-manager] 2026-01-05 00:49:05.913011 | orchestrator | 2026-01-05 00:49:05.913022 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-01-05 00:49:05.913033 | orchestrator | Monday 05 January 2026 00:48:56 +0000 (0:00:01.033) 0:00:12.399 ******** 2026-01-05 
00:49:05.913044 | orchestrator | changed: [testbed-manager] 2026-01-05 00:49:05.913055 | orchestrator | 2026-01-05 00:49:05.913066 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-01-05 00:49:05.913077 | orchestrator | Monday 05 January 2026 00:48:57 +0000 (0:00:00.936) 0:00:13.336 ******** 2026-01-05 00:49:05.913088 | orchestrator | ok: [testbed-manager] 2026-01-05 00:49:05.913099 | orchestrator | 2026-01-05 00:49:05.913110 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-01-05 00:49:05.913122 | orchestrator | Monday 05 January 2026 00:48:58 +0000 (0:00:01.177) 0:00:14.513 ******** 2026-01-05 00:49:05.913133 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:49:05.913146 | orchestrator | 2026-01-05 00:49:05.913158 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-01-05 00:49:05.913221 | orchestrator | Monday 05 January 2026 00:48:59 +0000 (0:00:00.142) 0:00:14.655 ******** 2026-01-05 00:49:05.913261 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:49:05.913276 | orchestrator | 2026-01-05 00:49:05.913289 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-01-05 00:49:05.913302 | orchestrator | Monday 05 January 2026 00:48:59 +0000 (0:00:00.165) 0:00:14.821 ******** 2026-01-05 00:49:05.913314 | orchestrator | changed: [testbed-manager] 2026-01-05 00:49:05.913326 | orchestrator | 2026-01-05 00:49:05.913339 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-01-05 00:49:05.913351 | orchestrator | Monday 05 January 2026 00:49:00 +0000 (0:00:00.936) 0:00:15.757 ******** 2026-01-05 00:49:05.913363 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-01-05 00:49:05.913376 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-01-05 00:49:05.913390 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-01-05 00:49:05.913403 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-01-05 00:49:05.913416 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-01-05 00:49:05.913428 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-01-05 00:49:05.913441 | orchestrator | 2026-01-05 00:49:05.913455 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-01-05 00:49:05.913467 | orchestrator | Monday 05 January 2026 00:49:02 +0000 (0:00:02.108) 0:00:17.866 ******** 2026-01-05 00:49:05.913480 | orchestrator | ok: [testbed-manager] 2026-01-05 00:49:05.913492 | orchestrator | 2026-01-05 00:49:05.913503 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-01-05 00:49:05.913514 | orchestrator | Monday 05 January 2026 00:49:04 +0000 (0:00:01.735) 0:00:19.601 ******** 2026-01-05 00:49:05.913525 | orchestrator | changed: [testbed-manager] 2026-01-05 00:49:05.913536 | orchestrator | 2026-01-05 00:49:05.913547 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:49:05.913558 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:49:05.913569 | orchestrator | 2026-01-05 00:49:05.913580 | orchestrator | 2026-01-05 00:49:05.913591 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:49:05.913601 | orchestrator | Monday 05 January 2026 00:49:05 +0000 (0:00:01.497) 0:00:21.098 ******** 2026-01-05 00:49:05.913612 | 
orchestrator | =============================================================================== 2026-01-05 00:49:05.913623 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.75s 2026-01-05 00:49:05.913634 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.11s 2026-01-05 00:49:05.913645 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.74s 2026-01-05 00:49:05.913656 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.50s 2026-01-05 00:49:05.913667 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.18s 2026-01-05 00:49:05.913697 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.16s 2026-01-05 00:49:05.913708 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.03s 2026-01-05 00:49:05.913719 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.94s 2026-01-05 00:49:05.913730 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.94s 2026-01-05 00:49:05.913741 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.23s 2026-01-05 00:49:05.913752 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.17s 2026-01-05 00:49:05.913763 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.14s 2026-01-05 00:49:06.355776 | orchestrator | 2026-01-05 00:49:06.359032 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Mon Jan 5 00:49:06 UTC 2026 2026-01-05 00:49:06.359126 | orchestrator | 2026-01-05 00:49:08.516933 | orchestrator | 2026-01-05 00:49:08 | INFO  | Collection nutshell is prepared for execution 2026-01-05 00:49:08.517058 | orchestrator | 2026-01-05 00:49:08 | INFO  | A [0] - 
dotfiles 2026-01-05 00:49:18.543938 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [0] - homer 2026-01-05 00:49:18.544034 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [0] - netdata 2026-01-05 00:49:18.544046 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [0] - openstackclient 2026-01-05 00:49:18.544057 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [0] - phpmyadmin 2026-01-05 00:49:18.544376 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [0] - common 2026-01-05 00:49:18.549801 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [1] -- loadbalancer 2026-01-05 00:49:18.549869 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [2] --- opensearch 2026-01-05 00:49:18.550144 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [2] --- mariadb-ng 2026-01-05 00:49:18.550182 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [3] ---- horizon 2026-01-05 00:49:18.550589 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [3] ---- keystone 2026-01-05 00:49:18.550702 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [4] ----- neutron 2026-01-05 00:49:18.551253 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [5] ------ wait-for-nova 2026-01-05 00:49:18.551276 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [6] ------- octavia 2026-01-05 00:49:18.552820 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [4] ----- barbican 2026-01-05 00:49:18.552956 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [4] ----- designate 2026-01-05 00:49:18.553347 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [4] ----- ironic 2026-01-05 00:49:18.553367 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [4] ----- placement 2026-01-05 00:49:18.553635 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [4] ----- magnum 2026-01-05 00:49:18.554506 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [1] -- openvswitch 2026-01-05 00:49:18.554540 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [2] --- ovn 2026-01-05 00:49:18.554952 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [1] -- memcached 2026-01-05 
00:49:18.554975 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [1] -- redis 2026-01-05 00:49:18.555219 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [1] -- rabbitmq-ng 2026-01-05 00:49:18.555464 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [0] - kubernetes 2026-01-05 00:49:18.558347 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [1] -- kubeconfig 2026-01-05 00:49:18.558421 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [1] -- copy-kubeconfig 2026-01-05 00:49:18.558859 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [0] - ceph 2026-01-05 00:49:18.561113 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [1] -- ceph-pools 2026-01-05 00:49:18.561197 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [2] --- copy-ceph-keys 2026-01-05 00:49:18.561508 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [3] ---- cephclient 2026-01-05 00:49:18.561531 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-01-05 00:49:18.561735 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [4] ----- wait-for-keystone 2026-01-05 00:49:18.561978 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [5] ------ kolla-ceph-rgw 2026-01-05 00:49:18.562382 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [5] ------ glance 2026-01-05 00:49:18.562448 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [5] ------ cinder 2026-01-05 00:49:18.562589 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [5] ------ nova 2026-01-05 00:49:18.563219 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [4] ----- prometheus 2026-01-05 00:49:18.563542 | orchestrator | 2026-01-05 00:49:18 | INFO  | A [5] ------ grafana 2026-01-05 00:49:18.744930 | orchestrator | 2026-01-05 00:49:18 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-01-05 00:49:18.747838 | orchestrator | 2026-01-05 00:49:18 | INFO  | Tasks are running in the background 2026-01-05 00:49:21.586407 | orchestrator | 2026-01-05 00:49:21 | INFO  | No task IDs specified, wait for all currently running 
tasks 2026-01-05 00:49:23.749377 | orchestrator | 2026-01-05 00:49:23 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED 2026-01-05 00:49:23.749899 | orchestrator | 2026-01-05 00:49:23 | INFO  | Task ce436531-2e97-4e05-adb4-e748947eb3bb is in state STARTED 2026-01-05 00:49:23.753981 | orchestrator | 2026-01-05 00:49:23 | INFO  | Task ce21b8b5-3457-4e20-a79c-c45038e916b1 is in state STARTED 2026-01-05 00:49:23.754808 | orchestrator | 2026-01-05 00:49:23 | INFO  | Task 8d6c5ee5-fabe-4a1f-a933-3253ecf06391 is in state STARTED 2026-01-05 00:49:23.755499 | orchestrator | 2026-01-05 00:49:23 | INFO  | Task 88b7e875-ccc8-4e4b-b7d5-d110ed345bca is in state STARTED 2026-01-05 00:49:23.758821 | orchestrator | 2026-01-05 00:49:23 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED 2026-01-05 00:49:23.760006 | orchestrator | 2026-01-05 00:49:23 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:49:23.760202 | orchestrator | 2026-01-05 00:49:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:49:26.810090 | orchestrator | 2026-01-05 00:49:26 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED 2026-01-05 00:49:26.811624 | orchestrator | 2026-01-05 00:49:26 | INFO  | Task ce436531-2e97-4e05-adb4-e748947eb3bb is in state STARTED 2026-01-05 00:49:26.812364 | orchestrator | 2026-01-05 00:49:26 | INFO  | Task ce21b8b5-3457-4e20-a79c-c45038e916b1 is in state STARTED 2026-01-05 00:49:26.815789 | orchestrator | 2026-01-05 00:49:26 | INFO  | Task 8d6c5ee5-fabe-4a1f-a933-3253ecf06391 is in state STARTED 2026-01-05 00:49:26.817870 | orchestrator | 2026-01-05 00:49:26 | INFO  | Task 88b7e875-ccc8-4e4b-b7d5-d110ed345bca is in state STARTED 2026-01-05 00:49:26.818638 | orchestrator | 2026-01-05 00:49:26 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED 2026-01-05 00:49:26.819528 | orchestrator | 2026-01-05 00:49:26 | INFO  | Task 
41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:49:26.819580 | orchestrator | 2026-01-05 00:49:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:49:30.015008 | orchestrator | 2026-01-05 00:49:30 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED 2026-01-05 00:49:30.015920 | orchestrator | 2026-01-05 00:49:30 | INFO  | Task ce436531-2e97-4e05-adb4-e748947eb3bb is in state STARTED 2026-01-05 00:49:30.019011 | orchestrator | 2026-01-05 00:49:30 | INFO  | Task ce21b8b5-3457-4e20-a79c-c45038e916b1 is in state STARTED 2026-01-05 00:49:30.019686 | orchestrator | 2026-01-05 00:49:30 | INFO  | Task 8d6c5ee5-fabe-4a1f-a933-3253ecf06391 is in state STARTED 2026-01-05 00:49:30.020304 | orchestrator | 2026-01-05 00:49:30 | INFO  | Task 88b7e875-ccc8-4e4b-b7d5-d110ed345bca is in state STARTED 2026-01-05 00:49:30.020985 | orchestrator | 2026-01-05 00:49:30 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED 2026-01-05 00:49:30.022623 | orchestrator | 2026-01-05 00:49:30 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:49:30.022705 | orchestrator | 2026-01-05 00:49:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:49:33.068850 | orchestrator | 2026-01-05 00:49:33 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED 2026-01-05 00:49:33.071013 | orchestrator | 2026-01-05 00:49:33 | INFO  | Task ce436531-2e97-4e05-adb4-e748947eb3bb is in state STARTED 2026-01-05 00:49:33.073540 | orchestrator | 2026-01-05 00:49:33 | INFO  | Task ce21b8b5-3457-4e20-a79c-c45038e916b1 is in state STARTED 2026-01-05 00:49:33.074328 | orchestrator | 2026-01-05 00:49:33 | INFO  | Task 8d6c5ee5-fabe-4a1f-a933-3253ecf06391 is in state STARTED 2026-01-05 00:49:33.074446 | orchestrator | 2026-01-05 00:49:33 | INFO  | Task 88b7e875-ccc8-4e4b-b7d5-d110ed345bca is in state STARTED 2026-01-05 00:49:33.076956 | orchestrator | 2026-01-05 00:49:33 | INFO  | Task 
4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED 2026-01-05 00:49:33.077659 | orchestrator | 2026-01-05 00:49:33 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:49:33.077689 | orchestrator | 2026-01-05 00:49:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:49:36.580875 | orchestrator | 2026-01-05 00:49:36 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED 2026-01-05 00:49:36.581018 | orchestrator | 2026-01-05 00:49:36 | INFO  | Task ce436531-2e97-4e05-adb4-e748947eb3bb is in state STARTED 2026-01-05 00:49:36.581806 | orchestrator | 2026-01-05 00:49:36 | INFO  | Task ce21b8b5-3457-4e20-a79c-c45038e916b1 is in state STARTED 2026-01-05 00:49:36.586676 | orchestrator | 2026-01-05 00:49:36 | INFO  | Task 8d6c5ee5-fabe-4a1f-a933-3253ecf06391 is in state STARTED 2026-01-05 00:49:36.588823 | orchestrator | 2026-01-05 00:49:36 | INFO  | Task 88b7e875-ccc8-4e4b-b7d5-d110ed345bca is in state STARTED 2026-01-05 00:49:36.590537 | orchestrator | 2026-01-05 00:49:36 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED 2026-01-05 00:49:36.590605 | orchestrator | 2026-01-05 00:49:36 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:49:36.590622 | orchestrator | 2026-01-05 00:49:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:49:39.873547 | orchestrator | 2026-01-05 00:49:39 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED 2026-01-05 00:49:39.877784 | orchestrator | 2026-01-05 00:49:39 | INFO  | Task ce436531-2e97-4e05-adb4-e748947eb3bb is in state STARTED 2026-01-05 00:49:39.881866 | orchestrator | 2026-01-05 00:49:39 | INFO  | Task ce21b8b5-3457-4e20-a79c-c45038e916b1 is in state STARTED 2026-01-05 00:49:39.886580 | orchestrator | 2026-01-05 00:49:39 | INFO  | Task 8d6c5ee5-fabe-4a1f-a933-3253ecf06391 is in state STARTED 2026-01-05 00:49:39.890881 | orchestrator | 2026-01-05 00:49:39 | INFO  | Task 
88b7e875-ccc8-4e4b-b7d5-d110ed345bca is in state STARTED
2026-01-05 00:49:39.897979 | orchestrator | 2026-01-05 00:49:39 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED
2026-01-05 00:49:39.899171 | orchestrator | 2026-01-05 00:49:39 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:49:39.899225 | orchestrator | 2026-01-05 00:49:39 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:49:43.135298 | orchestrator | 2026-01-05 00:49:43 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED
2026-01-05 00:49:43.136809 | orchestrator | 2026-01-05 00:49:43 | INFO  | Task ce436531-2e97-4e05-adb4-e748947eb3bb is in state STARTED
2026-01-05 00:49:43.138806 | orchestrator | 2026-01-05 00:49:43 | INFO  | Task ce21b8b5-3457-4e20-a79c-c45038e916b1 is in state STARTED
2026-01-05 00:49:43.143557 | orchestrator | 2026-01-05 00:49:43 | INFO  | Task 8d6c5ee5-fabe-4a1f-a933-3253ecf06391 is in state STARTED
2026-01-05 00:49:43.259790 | orchestrator | 2026-01-05 00:49:43 | INFO  | Task 88b7e875-ccc8-4e4b-b7d5-d110ed345bca is in state STARTED
2026-01-05 00:49:43.259884 | orchestrator | 2026-01-05 00:49:43 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED
2026-01-05 00:49:43.259891 | orchestrator | 2026-01-05 00:49:43 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:49:43.259896 | orchestrator | 2026-01-05 00:49:43 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:49:46.200989 | orchestrator | 2026-01-05 00:49:46 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED
2026-01-05 00:49:46.208361 | orchestrator |
2026-01-05 00:49:46.208469 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-01-05 00:49:46.208485 | orchestrator |
2026-01-05 00:49:46.208497 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2026-01-05 00:49:46.208508 | orchestrator | Monday 05 January 2026 00:49:32 +0000 (0:00:00.858) 0:00:00.858 ********
2026-01-05 00:49:46.208520 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:49:46.208532 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:49:46.208542 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:49:46.208553 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:49:46.208564 | orchestrator | changed: [testbed-manager]
2026-01-05 00:49:46.208575 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:49:46.208621 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:49:46.208649 | orchestrator |
2026-01-05 00:49:46.208671 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-01-05 00:49:46.208690 | orchestrator | Monday 05 January 2026 00:49:36 +0000 (0:00:04.478) 0:00:05.336 ********
2026-01-05 00:49:46.208710 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-01-05 00:49:46.208729 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-01-05 00:49:46.208749 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-01-05 00:49:46.208770 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-01-05 00:49:46.208790 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-01-05 00:49:46.208809 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-01-05 00:49:46.208829 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-01-05 00:49:46.208847 | orchestrator |
2026-01-05 00:49:46.208861 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.]
*** 2026-01-05 00:49:46.208875 | orchestrator | Monday 05 January 2026 00:49:39 +0000 (0:00:02.199) 0:00:07.536 ******** 2026-01-05 00:49:46.208894 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-05 00:49:38.228244', 'end': '2026-01-05 00:49:38.237236', 'delta': '0:00:00.008992', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-05 00:49:46.208922 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-05 00:49:38.250320', 'end': '2026-01-05 00:49:38.259078', 'delta': '0:00:00.008758', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-05 00:49:46.208960 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-05 00:49:38.568574', 'end': '2026-01-05 00:49:38.575701', 'delta': '0:00:00.007127', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-05 00:49:46.209007 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-05 00:49:38.060600', 'end': '2026-01-05 00:49:38.065091', 'delta': '0:00:00.004491', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-05 00:49:46.209021 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-05 00:49:38.668139', 'end': '2026-01-05 00:49:38.674856', 'delta': '0:00:00.006717', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-05 00:49:46.209526 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-05 00:49:38.940735', 'end': '2026-01-05 00:49:38.949359', 'delta': '0:00:00.008624', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-05 00:49:46.209566 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-05 00:49:39.041923', 'end': '2026-01-05 00:49:39.049985', 'delta': '0:00:00.008062', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-01-05 00:49:46.209611 | orchestrator |
2026-01-05 00:49:46.209634 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-01-05 00:49:46.209655 | orchestrator | Monday 05 January 2026 00:49:40 +0000 (0:00:01.523) 0:00:09.059 ********
2026-01-05 00:49:46.209672 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-01-05 00:49:46.209684 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-01-05 00:49:46.209695 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-01-05 00:49:46.209706 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-01-05 00:49:46.209716 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-01-05 00:49:46.209727 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-01-05 00:49:46.209738 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-01-05 00:49:46.209748 | orchestrator |
2026-01-05 00:49:46.209766 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-01-05 00:49:46.209778 | orchestrator | Monday 05 January 2026 00:49:42 +0000 (0:00:02.136) 0:00:11.196 ********
2026-01-05 00:49:46.209789 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-01-05 00:49:46.209800 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-01-05 00:49:46.209811 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-01-05 00:49:46.209822 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-01-05 00:49:46.209832 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-01-05 00:49:46.209843 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-01-05 00:49:46.209854 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-01-05 00:49:46.209865 | orchestrator |
2026-01-05 00:49:46.209876 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:49:46.209903 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:49:46.209917 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:49:46.209928 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:49:46.209939 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:49:46.209950 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:49:46.209961 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:49:46.209972 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:49:46.209982 | orchestrator |
2026-01-05 00:49:46.209996 | orchestrator |
2026-01-05 00:49:46.210118 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:49:46.210179 | orchestrator | Monday 05 January 2026 00:49:44 +0000 (0:00:01.634) 0:00:12.831 ********
2026-01-05 00:49:46.210198 | orchestrator | ===============================================================================
2026-01-05 00:49:46.210233 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.48s
2026-01-05 00:49:46.210253 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.20s
2026-01-05 00:49:46.210271 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.14s
2026-01-05 00:49:46.210289 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 1.63s
2026-01-05 00:49:46.210308 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.52s
2026-01-05 00:49:46.210327 | orchestrator | 2026-01-05 00:49:46 | INFO  | Task ce436531-2e97-4e05-adb4-e748947eb3bb is in state STARTED
2026-01-05 00:49:46.210346 | orchestrator | 2026-01-05 00:49:46 | INFO  | Task ce21b8b5-3457-4e20-a79c-c45038e916b1 is in state SUCCESS
2026-01-05 00:49:46.220665 | orchestrator | 2026-01-05 00:49:46 | INFO  | Task 8d6c5ee5-fabe-4a1f-a933-3253ecf06391 is in state STARTED
2026-01-05 00:49:46.237465 | orchestrator | 2026-01-05 00:49:46 | INFO  | Task 88b7e875-ccc8-4e4b-b7d5-d110ed345bca is in state STARTED
2026-01-05 00:49:46.273308 | orchestrator | 2026-01-05 00:49:46 | INFO  | Task 780039c7-d6b6-4c4d-ba9e-4bd4c1c39a9e is in state STARTED
2026-01-05 00:49:46.273449 | orchestrator | 2026-01-05 00:49:46 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED
2026-01-05 00:49:46.273459 | orchestrator | 2026-01-05 00:49:46 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:49:46.273468 | orchestrator | 2026-01-05 00:49:46 | INFO  | Wait 1 second(s)
until the next check 2026-01-05 00:49:50.132156 | orchestrator | 2026-01-05 00:49:49 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED 2026-01-05 00:49:50.132254 | orchestrator | 2026-01-05 00:49:49 | INFO  | Task ce436531-2e97-4e05-adb4-e748947eb3bb is in state STARTED 2026-01-05 00:49:50.132264 | orchestrator | 2026-01-05 00:49:49 | INFO  | Task 8d6c5ee5-fabe-4a1f-a933-3253ecf06391 is in state STARTED 2026-01-05 00:49:50.132271 | orchestrator | 2026-01-05 00:49:49 | INFO  | Task 88b7e875-ccc8-4e4b-b7d5-d110ed345bca is in state STARTED 2026-01-05 00:49:50.132278 | orchestrator | 2026-01-05 00:49:49 | INFO  | Task 780039c7-d6b6-4c4d-ba9e-4bd4c1c39a9e is in state STARTED 2026-01-05 00:49:50.132285 | orchestrator | 2026-01-05 00:49:49 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED 2026-01-05 00:49:50.132314 | orchestrator | 2026-01-05 00:49:49 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:49:50.132322 | orchestrator | 2026-01-05 00:49:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:49:52.794327 | orchestrator | 2026-01-05 00:49:52 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED 2026-01-05 00:49:52.794424 | orchestrator | 2026-01-05 00:49:52 | INFO  | Task ce436531-2e97-4e05-adb4-e748947eb3bb is in state STARTED 2026-01-05 00:49:52.794433 | orchestrator | 2026-01-05 00:49:52 | INFO  | Task 8d6c5ee5-fabe-4a1f-a933-3253ecf06391 is in state STARTED 2026-01-05 00:49:52.794440 | orchestrator | 2026-01-05 00:49:52 | INFO  | Task 88b7e875-ccc8-4e4b-b7d5-d110ed345bca is in state STARTED 2026-01-05 00:49:52.794447 | orchestrator | 2026-01-05 00:49:52 | INFO  | Task 780039c7-d6b6-4c4d-ba9e-4bd4c1c39a9e is in state STARTED 2026-01-05 00:49:52.794454 | orchestrator | 2026-01-05 00:49:52 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED 2026-01-05 00:49:52.794461 | orchestrator | 2026-01-05 00:49:52 | INFO  | Task 
41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:49:52.794468 | orchestrator | 2026-01-05 00:49:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:49:55.964825 | orchestrator | 2026-01-05 00:49:55 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED 2026-01-05 00:49:55.964939 | orchestrator | 2026-01-05 00:49:55 | INFO  | Task ce436531-2e97-4e05-adb4-e748947eb3bb is in state STARTED 2026-01-05 00:49:55.964952 | orchestrator | 2026-01-05 00:49:55 | INFO  | Task 8d6c5ee5-fabe-4a1f-a933-3253ecf06391 is in state STARTED 2026-01-05 00:49:55.964961 | orchestrator | 2026-01-05 00:49:55 | INFO  | Task 88b7e875-ccc8-4e4b-b7d5-d110ed345bca is in state STARTED 2026-01-05 00:49:55.966225 | orchestrator | 2026-01-05 00:49:55 | INFO  | Task 780039c7-d6b6-4c4d-ba9e-4bd4c1c39a9e is in state STARTED 2026-01-05 00:49:55.971274 | orchestrator | 2026-01-05 00:49:55 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED 2026-01-05 00:49:55.974278 | orchestrator | 2026-01-05 00:49:55 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:49:55.974379 | orchestrator | 2026-01-05 00:49:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:49:59.086430 | orchestrator | 2026-01-05 00:49:59 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED 2026-01-05 00:49:59.087873 | orchestrator | 2026-01-05 00:49:59 | INFO  | Task ce436531-2e97-4e05-adb4-e748947eb3bb is in state STARTED 2026-01-05 00:49:59.091234 | orchestrator | 2026-01-05 00:49:59 | INFO  | Task 8d6c5ee5-fabe-4a1f-a933-3253ecf06391 is in state STARTED 2026-01-05 00:49:59.092579 | orchestrator | 2026-01-05 00:49:59 | INFO  | Task 88b7e875-ccc8-4e4b-b7d5-d110ed345bca is in state STARTED 2026-01-05 00:49:59.094876 | orchestrator | 2026-01-05 00:49:59 | INFO  | Task 780039c7-d6b6-4c4d-ba9e-4bd4c1c39a9e is in state STARTED 2026-01-05 00:49:59.097388 | orchestrator | 2026-01-05 00:49:59 | INFO  | Task 
4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED 2026-01-05 00:49:59.099783 | orchestrator | 2026-01-05 00:49:59 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:49:59.099835 | orchestrator | 2026-01-05 00:49:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:50:02.202940 | orchestrator | 2026-01-05 00:50:02 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED 2026-01-05 00:50:02.206324 | orchestrator | 2026-01-05 00:50:02 | INFO  | Task ce436531-2e97-4e05-adb4-e748947eb3bb is in state STARTED 2026-01-05 00:50:02.208060 | orchestrator | 2026-01-05 00:50:02 | INFO  | Task 8d6c5ee5-fabe-4a1f-a933-3253ecf06391 is in state STARTED 2026-01-05 00:50:02.209712 | orchestrator | 2026-01-05 00:50:02 | INFO  | Task 88b7e875-ccc8-4e4b-b7d5-d110ed345bca is in state STARTED 2026-01-05 00:50:02.211169 | orchestrator | 2026-01-05 00:50:02 | INFO  | Task 780039c7-d6b6-4c4d-ba9e-4bd4c1c39a9e is in state STARTED 2026-01-05 00:50:02.213425 | orchestrator | 2026-01-05 00:50:02 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED 2026-01-05 00:50:02.213927 | orchestrator | 2026-01-05 00:50:02 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:50:02.214422 | orchestrator | 2026-01-05 00:50:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:50:05.339999 | orchestrator | 2026-01-05 00:50:05 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED 2026-01-05 00:50:05.340891 | orchestrator | 2026-01-05 00:50:05 | INFO  | Task ce436531-2e97-4e05-adb4-e748947eb3bb is in state STARTED 2026-01-05 00:50:05.347910 | orchestrator | 2026-01-05 00:50:05 | INFO  | Task 8d6c5ee5-fabe-4a1f-a933-3253ecf06391 is in state STARTED 2026-01-05 00:50:05.347997 | orchestrator | 2026-01-05 00:50:05 | INFO  | Task 88b7e875-ccc8-4e4b-b7d5-d110ed345bca is in state STARTED 2026-01-05 00:50:05.350264 | orchestrator | 2026-01-05 00:50:05 | INFO  | Task 
780039c7-d6b6-4c4d-ba9e-4bd4c1c39a9e is in state STARTED 2026-01-05 00:50:05.351730 | orchestrator | 2026-01-05 00:50:05 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED 2026-01-05 00:50:05.354627 | orchestrator | 2026-01-05 00:50:05 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:50:05.354671 | orchestrator | 2026-01-05 00:50:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:50:08.539497 | orchestrator | 2026-01-05 00:50:08 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED 2026-01-05 00:50:08.539618 | orchestrator | 2026-01-05 00:50:08 | INFO  | Task ce436531-2e97-4e05-adb4-e748947eb3bb is in state STARTED 2026-01-05 00:50:08.539635 | orchestrator | 2026-01-05 00:50:08 | INFO  | Task 8d6c5ee5-fabe-4a1f-a933-3253ecf06391 is in state STARTED 2026-01-05 00:50:08.539651 | orchestrator | 2026-01-05 00:50:08 | INFO  | Task 88b7e875-ccc8-4e4b-b7d5-d110ed345bca is in state STARTED 2026-01-05 00:50:08.539665 | orchestrator | 2026-01-05 00:50:08 | INFO  | Task 780039c7-d6b6-4c4d-ba9e-4bd4c1c39a9e is in state STARTED 2026-01-05 00:50:08.539679 | orchestrator | 2026-01-05 00:50:08 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED 2026-01-05 00:50:08.539692 | orchestrator | 2026-01-05 00:50:08 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:50:08.539707 | orchestrator | 2026-01-05 00:50:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:50:11.603512 | orchestrator | 2026-01-05 00:50:11 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED 2026-01-05 00:50:11.603964 | orchestrator | 2026-01-05 00:50:11 | INFO  | Task ce436531-2e97-4e05-adb4-e748947eb3bb is in state STARTED 2026-01-05 00:50:11.604776 | orchestrator | 2026-01-05 00:50:11 | INFO  | Task 8d6c5ee5-fabe-4a1f-a933-3253ecf06391 is in state STARTED 2026-01-05 00:50:11.605531 | orchestrator | 2026-01-05 00:50:11 | INFO  | Task 
88b7e875-ccc8-4e4b-b7d5-d110ed345bca is in state SUCCESS 2026-01-05 00:50:11.606544 | orchestrator | 2026-01-05 00:50:11 | INFO  | Task 780039c7-d6b6-4c4d-ba9e-4bd4c1c39a9e is in state STARTED 2026-01-05 00:50:11.615263 | orchestrator | 2026-01-05 00:50:11 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED 2026-01-05 00:50:11.615373 | orchestrator | 2026-01-05 00:50:11 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:50:11.615396 | orchestrator | 2026-01-05 00:50:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:50:14.794582 | orchestrator | 2026-01-05 00:50:14 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED 2026-01-05 00:50:14.796274 | orchestrator | 2026-01-05 00:50:14 | INFO  | Task ce436531-2e97-4e05-adb4-e748947eb3bb is in state STARTED 2026-01-05 00:50:14.805968 | orchestrator | 2026-01-05 00:50:14 | INFO  | Task 8d6c5ee5-fabe-4a1f-a933-3253ecf06391 is in state STARTED 2026-01-05 00:50:14.810907 | orchestrator | 2026-01-05 00:50:14 | INFO  | Task 780039c7-d6b6-4c4d-ba9e-4bd4c1c39a9e is in state STARTED 2026-01-05 00:50:14.818173 | orchestrator | 2026-01-05 00:50:14 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED 2026-01-05 00:50:14.836513 | orchestrator | 2026-01-05 00:50:14 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:50:14.836603 | orchestrator | 2026-01-05 00:50:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:50:17.897776 | orchestrator | 2026-01-05 00:50:17 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED 2026-01-05 00:50:17.900252 | orchestrator | 2026-01-05 00:50:17 | INFO  | Task ce436531-2e97-4e05-adb4-e748947eb3bb is in state STARTED 2026-01-05 00:50:17.903330 | orchestrator | 2026-01-05 00:50:17 | INFO  | Task 8d6c5ee5-fabe-4a1f-a933-3253ecf06391 is in state STARTED 2026-01-05 00:50:17.904312 | orchestrator | 2026-01-05 00:50:17 | INFO  | Task 
780039c7-d6b6-4c4d-ba9e-4bd4c1c39a9e is in state STARTED 2026-01-05 00:50:17.904709 | orchestrator | 2026-01-05 00:50:17 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED 2026-01-05 00:50:17.907535 | orchestrator | 2026-01-05 00:50:17 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:50:17.907587 | orchestrator | 2026-01-05 00:50:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:50:21.113657 | orchestrator | 2026-01-05 00:50:20 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED 2026-01-05 00:50:21.113772 | orchestrator | 2026-01-05 00:50:20 | INFO  | Task ce436531-2e97-4e05-adb4-e748947eb3bb is in state STARTED 2026-01-05 00:50:21.113797 | orchestrator | 2026-01-05 00:50:20 | INFO  | Task 8d6c5ee5-fabe-4a1f-a933-3253ecf06391 is in state STARTED 2026-01-05 00:50:21.113818 | orchestrator | 2026-01-05 00:50:21 | INFO  | Task 780039c7-d6b6-4c4d-ba9e-4bd4c1c39a9e is in state STARTED 2026-01-05 00:50:21.113837 | orchestrator | 2026-01-05 00:50:21 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED 2026-01-05 00:50:21.113851 | orchestrator | 2026-01-05 00:50:21 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:50:21.113863 | orchestrator | 2026-01-05 00:50:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:50:24.062756 | orchestrator | 2026-01-05 00:50:24 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED 2026-01-05 00:50:24.063056 | orchestrator | 2026-01-05 00:50:24 | INFO  | Task ce436531-2e97-4e05-adb4-e748947eb3bb is in state STARTED 2026-01-05 00:50:24.064951 | orchestrator | 2026-01-05 00:50:24 | INFO  | Task 8d6c5ee5-fabe-4a1f-a933-3253ecf06391 is in state STARTED 2026-01-05 00:50:24.065149 | orchestrator | 2026-01-05 00:50:24 | INFO  | Task 780039c7-d6b6-4c4d-ba9e-4bd4c1c39a9e is in state STARTED 2026-01-05 00:50:24.066537 | orchestrator | 2026-01-05 00:50:24 | INFO  | Task 
4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED 2026-01-05 00:50:24.067385 | orchestrator | 2026-01-05 00:50:24 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:50:24.067420 | orchestrator | 2026-01-05 00:50:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:50:27.122273 | orchestrator | 2026-01-05 00:50:27 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED 2026-01-05 00:50:27.122734 | orchestrator | 2026-01-05 00:50:27 | INFO  | Task ce436531-2e97-4e05-adb4-e748947eb3bb is in state SUCCESS 2026-01-05 00:50:27.133628 | orchestrator | 2026-01-05 00:50:27 | INFO  | Task 8d6c5ee5-fabe-4a1f-a933-3253ecf06391 is in state STARTED 2026-01-05 00:50:27.138940 | orchestrator | 2026-01-05 00:50:27 | INFO  | Task 780039c7-d6b6-4c4d-ba9e-4bd4c1c39a9e is in state STARTED 2026-01-05 00:50:27.141384 | orchestrator | 2026-01-05 00:50:27 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED 2026-01-05 00:50:27.142193 | orchestrator | 2026-01-05 00:50:27 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:50:27.142247 | orchestrator | 2026-01-05 00:50:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:50:30.206847 | orchestrator | 2026-01-05 00:50:30 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED 2026-01-05 00:50:30.208635 | orchestrator | 2026-01-05 00:50:30 | INFO  | Task 8d6c5ee5-fabe-4a1f-a933-3253ecf06391 is in state STARTED 2026-01-05 00:50:30.209096 | orchestrator | 2026-01-05 00:50:30 | INFO  | Task 780039c7-d6b6-4c4d-ba9e-4bd4c1c39a9e is in state STARTED 2026-01-05 00:50:30.210764 | orchestrator | 2026-01-05 00:50:30 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED 2026-01-05 00:50:30.212167 | orchestrator | 2026-01-05 00:50:30 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:50:30.212209 | orchestrator | 2026-01-05 00:50:30 | INFO  | Wait 1 
second(s) until the next check 2026-01-05 00:50:33.258250 | orchestrator | 2026-01-05 00:50:33 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED 2026-01-05 00:50:33.258326 | orchestrator | 2026-01-05 00:50:33 | INFO  | Task 8d6c5ee5-fabe-4a1f-a933-3253ecf06391 is in state STARTED 2026-01-05 00:50:33.259179 | orchestrator | 2026-01-05 00:50:33 | INFO  | Task 780039c7-d6b6-4c4d-ba9e-4bd4c1c39a9e is in state STARTED 2026-01-05 00:50:33.260019 | orchestrator | 2026-01-05 00:50:33 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED 2026-01-05 00:50:33.261714 | orchestrator | 2026-01-05 00:50:33 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:50:33.261763 | orchestrator | 2026-01-05 00:50:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:50:36.329455 | orchestrator | 2026-01-05 00:50:36 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED 2026-01-05 00:50:36.330752 | orchestrator | 2026-01-05 00:50:36 | INFO  | Task 8d6c5ee5-fabe-4a1f-a933-3253ecf06391 is in state STARTED 2026-01-05 00:50:36.335454 | orchestrator | 2026-01-05 00:50:36 | INFO  | Task 780039c7-d6b6-4c4d-ba9e-4bd4c1c39a9e is in state STARTED 2026-01-05 00:50:36.336268 | orchestrator | 2026-01-05 00:50:36 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED 2026-01-05 00:50:36.337848 | orchestrator | 2026-01-05 00:50:36 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:50:36.337923 | orchestrator | 2026-01-05 00:50:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:50:39.476419 | orchestrator | 2026-01-05 00:50:39 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED 2026-01-05 00:50:39.479415 | orchestrator | 2026-01-05 00:50:39 | INFO  | Task 8d6c5ee5-fabe-4a1f-a933-3253ecf06391 is in state STARTED 2026-01-05 00:50:39.482982 | orchestrator | 2026-01-05 00:50:39 | INFO  | Task 
780039c7-d6b6-4c4d-ba9e-4bd4c1c39a9e is in state STARTED 2026-01-05 00:50:39.489956 | orchestrator | 2026-01-05 00:50:39 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED 2026-01-05 00:50:39.500916 | orchestrator | 2026-01-05 00:50:39 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:50:39.502644 | orchestrator | 2026-01-05 00:50:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:50:42.608721 | orchestrator | 2026-01-05 00:50:42 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED 2026-01-05 00:50:42.619462 | orchestrator | 2026-01-05 00:50:42 | INFO  | Task 8d6c5ee5-fabe-4a1f-a933-3253ecf06391 is in state STARTED 2026-01-05 00:50:42.619539 | orchestrator | 2026-01-05 00:50:42 | INFO  | Task 780039c7-d6b6-4c4d-ba9e-4bd4c1c39a9e is in state STARTED 2026-01-05 00:50:42.620803 | orchestrator | 2026-01-05 00:50:42 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED 2026-01-05 00:50:42.625767 | orchestrator | 2026-01-05 00:50:42 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:50:42.625839 | orchestrator | 2026-01-05 00:50:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:50:45.715234 | orchestrator | 2026-01-05 00:50:45 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED 2026-01-05 00:50:45.715292 | orchestrator | 2026-01-05 00:50:45 | INFO  | Task 8d6c5ee5-fabe-4a1f-a933-3253ecf06391 is in state STARTED 2026-01-05 00:50:45.715296 | orchestrator | 2026-01-05 00:50:45 | INFO  | Task 780039c7-d6b6-4c4d-ba9e-4bd4c1c39a9e is in state STARTED 2026-01-05 00:50:45.715300 | orchestrator | 2026-01-05 00:50:45 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED 2026-01-05 00:50:45.715303 | orchestrator | 2026-01-05 00:50:45 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:50:45.715307 | orchestrator | 2026-01-05 00:50:45 | INFO  | Wait 1 
second(s) until the next check 2026-01-05 00:51:04.151578 | orchestrator | 2026-01-05 00:51:04 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED 2026-01-05 00:51:04.154680 | orchestrator | 2026-01-05 00:51:04 | INFO  | Task 8d6c5ee5-fabe-4a1f-a933-3253ecf06391 is in state STARTED 2026-01-05 00:51:04.154747 | orchestrator | 2026-01-05 00:51:04 | INFO  | Task 780039c7-d6b6-4c4d-ba9e-4bd4c1c39a9e is in state SUCCESS 2026-01-05 00:51:04.155629 | orchestrator | 2026-01-05 00:51:04.155661 | orchestrator | 2026-01-05 00:51:04.155667 | orchestrator | PLAY [Apply role homer] ******************************************************** 2026-01-05 00:51:04.155673 | orchestrator | 2026-01-05 00:51:04.155678 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2026-01-05 00:51:04.155683 | orchestrator | Monday 05 January 2026 00:49:32 +0000 (0:00:01.080) 0:00:01.080 ******** 2026-01-05 00:51:04.155688 | orchestrator | ok: [testbed-manager] => { 2026-01-05 00:51:04.155696 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2026-01-05 00:51:04.155703 | orchestrator | } 2026-01-05 00:51:04.155708 | orchestrator | 2026-01-05 00:51:04.155713 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2026-01-05 00:51:04.155717 | orchestrator | Monday 05 January 2026 00:49:33 +0000 (0:00:00.689) 0:00:01.769 ******** 2026-01-05 00:51:04.155721 | orchestrator | ok: [testbed-manager] 2026-01-05 00:51:04.155726 | orchestrator | 2026-01-05 00:51:04.155730 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2026-01-05 00:51:04.155734 | orchestrator | Monday 05 January 2026 00:49:35 +0000 (0:00:02.538) 0:00:04.308 ******** 2026-01-05 00:51:04.155738 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2026-01-05 00:51:04.155742 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2026-01-05 00:51:04.155746 | orchestrator | 2026-01-05 00:51:04.155750 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2026-01-05 00:51:04.155754 | orchestrator | Monday 05 January 2026 00:49:36 +0000 (0:00:01.040) 0:00:05.349 ******** 2026-01-05 00:51:04.155758 | orchestrator | changed: [testbed-manager] 2026-01-05 00:51:04.155762 | orchestrator | 2026-01-05 00:51:04.155766 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2026-01-05 00:51:04.155770 | orchestrator | Monday 05 January 2026 00:49:38 +0000 (0:00:02.167) 0:00:07.516 ******** 2026-01-05 00:51:04.155773 | orchestrator | changed: [testbed-manager] 2026-01-05 00:51:04.155777 | orchestrator | 2026-01-05 00:51:04.155782 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2026-01-05 00:51:04.155786 | orchestrator | Monday 05 January 2026 00:49:41 +0000 (0:00:02.556) 0:00:10.072 ******** 2026-01-05 00:51:04.155790 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2026-01-05 00:51:04.155805 | orchestrator | ok: [testbed-manager] 2026-01-05 00:51:04.155809 | orchestrator | 2026-01-05 00:51:04.155813 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2026-01-05 00:51:04.155821 | orchestrator | Monday 05 January 2026 00:50:08 +0000 (0:00:27.257) 0:00:37.329 ******** 2026-01-05 00:51:04.155825 | orchestrator | changed: [testbed-manager] 2026-01-05 00:51:04.155829 | orchestrator | 2026-01-05 00:51:04.155833 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:51:04.155837 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:51:04.155842 | orchestrator | 2026-01-05 00:51:04.155846 | orchestrator | 2026-01-05 00:51:04.155850 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:51:04.155854 | orchestrator | Monday 05 January 2026 00:50:11 +0000 (0:00:02.355) 0:00:39.685 ******** 2026-01-05 00:51:04.155858 | orchestrator | =============================================================================== 2026-01-05 00:51:04.155862 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 27.26s 2026-01-05 00:51:04.155866 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.56s 2026-01-05 00:51:04.155870 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.54s 2026-01-05 00:51:04.155874 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.36s 2026-01-05 00:51:04.155877 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.17s 2026-01-05 00:51:04.155881 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.04s 2026-01-05 00:51:04.155885 | orchestrator | osism.services.homer : Inform 
about new parameter homer_url_opensearch_dashboards --- 0.69s 2026-01-05 00:51:04.155889 | orchestrator | 2026-01-05 00:51:04.155893 | orchestrator | 2026-01-05 00:51:04.155897 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-01-05 00:51:04.155901 | orchestrator | 2026-01-05 00:51:04.155905 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-01-05 00:51:04.155909 | orchestrator | Monday 05 January 2026 00:49:32 +0000 (0:00:00.650) 0:00:00.650 ******** 2026-01-05 00:51:04.155913 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-01-05 00:51:04.155918 | orchestrator | 2026-01-05 00:51:04.155922 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-01-05 00:51:04.155928 | orchestrator | Monday 05 January 2026 00:49:33 +0000 (0:00:00.851) 0:00:01.502 ******** 2026-01-05 00:51:04.155934 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-01-05 00:51:04.155940 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-01-05 00:51:04.155947 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-01-05 00:51:04.155952 | orchestrator | 2026-01-05 00:51:04.155958 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-01-05 00:51:04.155964 | orchestrator | Monday 05 January 2026 00:49:36 +0000 (0:00:03.085) 0:00:04.588 ******** 2026-01-05 00:51:04.155970 | orchestrator | changed: [testbed-manager] 2026-01-05 00:51:04.155977 | orchestrator | 2026-01-05 00:51:04.155983 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-01-05 00:51:04.155989 | orchestrator | Monday 05 January 2026 00:49:39 +0000 (0:00:02.704) 
0:00:07.292 ******** 2026-01-05 00:51:04.156006 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2026-01-05 00:51:04.156013 | orchestrator | ok: [testbed-manager] 2026-01-05 00:51:04.156021 | orchestrator | 2026-01-05 00:51:04.156160 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-01-05 00:51:04.156171 | orchestrator | Monday 05 January 2026 00:50:17 +0000 (0:00:38.157) 0:00:45.450 ******** 2026-01-05 00:51:04.156185 | orchestrator | changed: [testbed-manager] 2026-01-05 00:51:04.156192 | orchestrator | 2026-01-05 00:51:04.156198 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-01-05 00:51:04.156205 | orchestrator | Monday 05 January 2026 00:50:18 +0000 (0:00:01.434) 0:00:46.885 ******** 2026-01-05 00:51:04.156213 | orchestrator | ok: [testbed-manager] 2026-01-05 00:51:04.156220 | orchestrator | 2026-01-05 00:51:04.156227 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-01-05 00:51:04.156234 | orchestrator | Monday 05 January 2026 00:50:19 +0000 (0:00:00.586) 0:00:47.471 ******** 2026-01-05 00:51:04.156241 | orchestrator | changed: [testbed-manager] 2026-01-05 00:51:04.156247 | orchestrator | 2026-01-05 00:51:04.156253 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-01-05 00:51:04.156257 | orchestrator | Monday 05 January 2026 00:50:22 +0000 (0:00:02.722) 0:00:50.193 ******** 2026-01-05 00:51:04.156261 | orchestrator | changed: [testbed-manager] 2026-01-05 00:51:04.156265 | orchestrator | 2026-01-05 00:51:04.156268 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-01-05 00:51:04.156272 | orchestrator | Monday 05 January 2026 00:50:23 +0000 (0:00:00.993) 0:00:51.187 ******** 2026-01-05 00:51:04.156276 | orchestrator | changed: 
[testbed-manager] 2026-01-05 00:51:04.156280 | orchestrator | 2026-01-05 00:51:04.156284 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-01-05 00:51:04.156288 | orchestrator | Monday 05 January 2026 00:50:23 +0000 (0:00:00.578) 0:00:51.766 ******** 2026-01-05 00:51:04.156292 | orchestrator | ok: [testbed-manager] 2026-01-05 00:51:04.156296 | orchestrator | 2026-01-05 00:51:04.156300 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:51:04.156304 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:51:04.156308 | orchestrator | 2026-01-05 00:51:04.156312 | orchestrator | 2026-01-05 00:51:04.156316 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:51:04.156320 | orchestrator | Monday 05 January 2026 00:50:24 +0000 (0:00:00.384) 0:00:52.151 ******** 2026-01-05 00:51:04.156328 | orchestrator | =============================================================================== 2026-01-05 00:51:04.156332 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 38.16s 2026-01-05 00:51:04.156336 | orchestrator | osism.services.openstackclient : Create required directories ------------ 3.09s 2026-01-05 00:51:04.156340 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.72s 2026-01-05 00:51:04.156344 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.70s 2026-01-05 00:51:04.156348 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.43s 2026-01-05 00:51:04.156352 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.99s 2026-01-05 00:51:04.156356 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.85s 
2026-01-05 00:51:04.156361 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.59s 2026-01-05 00:51:04.156365 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.58s 2026-01-05 00:51:04.156369 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.38s 2026-01-05 00:51:04.156373 | orchestrator | 2026-01-05 00:51:04.156377 | orchestrator | 2026-01-05 00:51:04.156381 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-01-05 00:51:04.156385 | orchestrator | 2026-01-05 00:51:04.156389 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-01-05 00:51:04.156393 | orchestrator | Monday 05 January 2026 00:49:50 +0000 (0:00:00.294) 0:00:00.294 ******** 2026-01-05 00:51:04.156397 | orchestrator | ok: [testbed-manager] 2026-01-05 00:51:04.156402 | orchestrator | 2026-01-05 00:51:04.156406 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-01-05 00:51:04.156413 | orchestrator | Monday 05 January 2026 00:49:52 +0000 (0:00:01.771) 0:00:02.065 ******** 2026-01-05 00:51:04.156417 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-01-05 00:51:04.156421 | orchestrator | 2026-01-05 00:51:04.156426 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2026-01-05 00:51:04.156430 | orchestrator | Monday 05 January 2026 00:49:53 +0000 (0:00:01.090) 0:00:03.156 ******** 2026-01-05 00:51:04.156434 | orchestrator | changed: [testbed-manager] 2026-01-05 00:51:04.156438 | orchestrator | 2026-01-05 00:51:04.156442 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2026-01-05 00:51:04.156446 | orchestrator | Monday 05 January 2026 00:49:55 +0000 (0:00:02.076) 0:00:05.232 ******** 2026-01-05 00:51:04.156450 | 
orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2026-01-05 00:51:04.156454 | orchestrator | ok: [testbed-manager] 2026-01-05 00:51:04.156458 | orchestrator | 2026-01-05 00:51:04.156462 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-01-05 00:51:04.156466 | orchestrator | Monday 05 January 2026 00:50:53 +0000 (0:00:58.354) 0:01:03.587 ******** 2026-01-05 00:51:04.156470 | orchestrator | changed: [testbed-manager] 2026-01-05 00:51:04.156475 | orchestrator | 2026-01-05 00:51:04.156479 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:51:04.156483 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:51:04.156487 | orchestrator | 2026-01-05 00:51:04.156491 | orchestrator | 2026-01-05 00:51:04.156495 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:51:04.156507 | orchestrator | Monday 05 January 2026 00:51:02 +0000 (0:00:08.470) 0:01:12.058 ******** 2026-01-05 00:51:04.156511 | orchestrator | =============================================================================== 2026-01-05 00:51:04.156515 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 58.35s 2026-01-05 00:51:04.156519 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 8.47s 2026-01-05 00:51:04.156523 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 2.08s 2026-01-05 00:51:04.156528 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.77s 2026-01-05 00:51:04.156532 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.09s 2026-01-05 00:51:04.156536 | orchestrator | 2026-01-05 00:51:04 | INFO  | Task 
4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED 2026-01-05 00:51:04.158245 | orchestrator | 2026-01-05 00:51:04 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:51:04.158279 | orchestrator | 2026-01-05 00:51:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:51:07.206410 | orchestrator | 2026-01-05 00:51:07 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED 2026-01-05 00:51:07.209418 | orchestrator | 2026-01-05 00:51:07 | INFO  | Task 8d6c5ee5-fabe-4a1f-a933-3253ecf06391 is in state STARTED 2026-01-05 00:51:07.210439 | orchestrator | 2026-01-05 00:51:07 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED 2026-01-05 00:51:07.212796 | orchestrator | 2026-01-05 00:51:07 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:51:07.212827 | orchestrator | 2026-01-05 00:51:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:51:10.269957 | orchestrator | 2026-01-05 00:51:10 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED 2026-01-05 00:51:10.272672 | orchestrator | 2026-01-05 00:51:10 | INFO  | Task 8d6c5ee5-fabe-4a1f-a933-3253ecf06391 is in state SUCCESS 2026-01-05 00:51:10.272856 | orchestrator | 2026-01-05 00:51:10.272874 | orchestrator | 2026-01-05 00:51:10.272894 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 00:51:10.272924 | orchestrator | 2026-01-05 00:51:10.272934 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 00:51:10.272943 | orchestrator | Monday 05 January 2026 00:49:33 +0000 (0:00:00.793) 0:00:00.793 ******** 2026-01-05 00:51:10.272951 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-01-05 00:51:10.272959 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-01-05 00:51:10.272967 | orchestrator | changed: 
[testbed-node-1] => (item=enable_netdata_True) 2026-01-05 00:51:10.272975 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-01-05 00:51:10.272982 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-01-05 00:51:10.272990 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-01-05 00:51:10.272998 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-01-05 00:51:10.273005 | orchestrator | 2026-01-05 00:51:10.273014 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-01-05 00:51:10.273037 | orchestrator | 2026-01-05 00:51:10.273046 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-01-05 00:51:10.273053 | orchestrator | Monday 05 January 2026 00:49:34 +0000 (0:00:01.453) 0:00:02.247 ******** 2026-01-05 00:51:10.273073 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:51:10.273086 | orchestrator | 2026-01-05 00:51:10.273094 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-01-05 00:51:10.273102 | orchestrator | Monday 05 January 2026 00:49:35 +0000 (0:00:01.287) 0:00:03.534 ******** 2026-01-05 00:51:10.273110 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:51:10.273119 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:51:10.273126 | orchestrator | ok: [testbed-manager] 2026-01-05 00:51:10.273133 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:51:10.273140 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:51:10.273148 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:51:10.273156 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:51:10.273163 | orchestrator | 2026-01-05 00:51:10.273172 | 
orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-01-05 00:51:10.273180 | orchestrator | Monday 05 January 2026 00:49:37 +0000 (0:00:01.668) 0:00:05.202 ******** 2026-01-05 00:51:10.273189 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:51:10.273198 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:51:10.273206 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:51:10.273214 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:51:10.273223 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:51:10.273231 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:51:10.273240 | orchestrator | ok: [testbed-manager] 2026-01-05 00:51:10.273249 | orchestrator | 2026-01-05 00:51:10.273258 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-01-05 00:51:10.273265 | orchestrator | Monday 05 January 2026 00:49:41 +0000 (0:00:04.504) 0:00:09.707 ******** 2026-01-05 00:51:10.273274 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:51:10.273282 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:51:10.273290 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:51:10.273299 | orchestrator | changed: [testbed-manager] 2026-01-05 00:51:10.273307 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:51:10.273315 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:51:10.273323 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:51:10.273331 | orchestrator | 2026-01-05 00:51:10.273340 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-01-05 00:51:10.273348 | orchestrator | Monday 05 January 2026 00:49:44 +0000 (0:00:02.986) 0:00:12.694 ******** 2026-01-05 00:51:10.273357 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:51:10.273366 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:51:10.273374 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:51:10.273391 | orchestrator | changed: 
[testbed-node-5] 2026-01-05 00:51:10.273399 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:51:10.273407 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:51:10.273414 | orchestrator | changed: [testbed-manager] 2026-01-05 00:51:10.273421 | orchestrator | 2026-01-05 00:51:10.273430 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-01-05 00:51:10.273438 | orchestrator | Monday 05 January 2026 00:49:59 +0000 (0:00:14.635) 0:00:27.329 ******** 2026-01-05 00:51:10.273447 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:51:10.273457 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:51:10.273468 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:51:10.273476 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:51:10.273485 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:51:10.273494 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:51:10.273503 | orchestrator | changed: [testbed-manager] 2026-01-05 00:51:10.273511 | orchestrator | 2026-01-05 00:51:10.273520 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-01-05 00:51:10.273529 | orchestrator | Monday 05 January 2026 00:50:40 +0000 (0:00:40.810) 0:01:08.140 ******** 2026-01-05 00:51:10.273540 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:51:10.273551 | orchestrator | 2026-01-05 00:51:10.273560 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-01-05 00:51:10.273569 | orchestrator | Monday 05 January 2026 00:50:42 +0000 (0:00:02.089) 0:01:10.229 ******** 2026-01-05 00:51:10.273579 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-01-05 00:51:10.273588 | orchestrator | changed: 
[testbed-node-1] => (item=netdata.conf) 2026-01-05 00:51:10.273596 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-01-05 00:51:10.273605 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-01-05 00:51:10.273629 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-01-05 00:51:10.273645 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-01-05 00:51:10.273654 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-01-05 00:51:10.273663 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-01-05 00:51:10.273672 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-01-05 00:51:10.273681 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-01-05 00:51:10.273689 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-01-05 00:51:10.273699 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-01-05 00:51:10.273708 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-01-05 00:51:10.273718 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-01-05 00:51:10.273726 | orchestrator | 2026-01-05 00:51:10.273736 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-01-05 00:51:10.273746 | orchestrator | Monday 05 January 2026 00:50:50 +0000 (0:00:07.574) 0:01:17.803 ******** 2026-01-05 00:51:10.273755 | orchestrator | ok: [testbed-manager] 2026-01-05 00:51:10.273764 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:51:10.273773 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:51:10.273783 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:51:10.273791 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:51:10.273800 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:51:10.273808 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:51:10.273817 | orchestrator | 2026-01-05 00:51:10.273827 | orchestrator | 
TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-01-05 00:51:10.273836 | orchestrator | Monday 05 January 2026 00:50:51 +0000 (0:00:01.461) 0:01:19.265 ******** 2026-01-05 00:51:10.273845 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:51:10.273854 | orchestrator | changed: [testbed-manager] 2026-01-05 00:51:10.273871 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:51:10.273880 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:51:10.273888 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:51:10.273896 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:51:10.273904 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:51:10.273912 | orchestrator | 2026-01-05 00:51:10.273919 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2026-01-05 00:51:10.273927 | orchestrator | Monday 05 January 2026 00:50:54 +0000 (0:00:03.012) 0:01:22.278 ******** 2026-01-05 00:51:10.273935 | orchestrator | ok: [testbed-manager] 2026-01-05 00:51:10.273943 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:51:10.273951 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:51:10.273960 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:51:10.273968 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:51:10.273977 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:51:10.273987 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:51:10.273995 | orchestrator | 2026-01-05 00:51:10.274003 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-01-05 00:51:10.274077 | orchestrator | Monday 05 January 2026 00:50:56 +0000 (0:00:02.398) 0:01:24.677 ******** 2026-01-05 00:51:10.274092 | orchestrator | ok: [testbed-manager] 2026-01-05 00:51:10.274101 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:51:10.274110 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:51:10.274119 | orchestrator | ok: [testbed-node-0] 2026-01-05 
00:51:10.274128 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:51:10.274137 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:51:10.274146 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:51:10.274155 | orchestrator | 2026-01-05 00:51:10.274164 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-01-05 00:51:10.274173 | orchestrator | Monday 05 January 2026 00:50:59 +0000 (0:00:02.584) 0:01:27.262 ******** 2026-01-05 00:51:10.274183 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-01-05 00:51:10.274196 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:51:10.274207 | orchestrator | 2026-01-05 00:51:10.274217 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-01-05 00:51:10.274226 | orchestrator | Monday 05 January 2026 00:51:01 +0000 (0:00:01.550) 0:01:28.812 ******** 2026-01-05 00:51:10.274236 | orchestrator | changed: [testbed-manager] 2026-01-05 00:51:10.274246 | orchestrator | 2026-01-05 00:51:10.274256 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-01-05 00:51:10.274264 | orchestrator | Monday 05 January 2026 00:51:03 +0000 (0:00:02.713) 0:01:31.526 ******** 2026-01-05 00:51:10.274273 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:51:10.274283 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:51:10.274292 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:51:10.274301 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:51:10.274310 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:51:10.274318 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:51:10.274327 | orchestrator | 
changed: [testbed-manager] 2026-01-05 00:51:10.274335 | orchestrator | 2026-01-05 00:51:10.274344 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:51:10.274354 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:51:10.274364 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:51:10.274373 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:51:10.274382 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:51:10.274417 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:51:10.274428 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:51:10.274438 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:51:10.274458 | orchestrator | 2026-01-05 00:51:10.274467 | orchestrator | 2026-01-05 00:51:10.274476 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:51:10.274486 | orchestrator | Monday 05 January 2026 00:51:06 +0000 (0:00:03.183) 0:01:34.709 ******** 2026-01-05 00:51:10.274494 | orchestrator | =============================================================================== 2026-01-05 00:51:10.274502 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 40.81s 2026-01-05 00:51:10.274510 | orchestrator | osism.services.netdata : Add repository -------------------------------- 14.64s 2026-01-05 00:51:10.274518 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 7.57s 2026-01-05 00:51:10.274525 | orchestrator | 
osism.services.netdata : Install apt-transport-https package ------------ 4.50s
2026-01-05 00:51:10.274533 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.18s
2026-01-05 00:51:10.274541 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 3.01s
2026-01-05 00:51:10.274549 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.99s
2026-01-05 00:51:10.274557 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.71s
2026-01-05 00:51:10.274566 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.58s
2026-01-05 00:51:10.274574 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.40s
2026-01-05 00:51:10.274583 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 2.09s
2026-01-05 00:51:10.274591 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.67s
2026-01-05 00:51:10.274599 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.55s
2026-01-05 00:51:10.274607 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.46s
2026-01-05 00:51:10.274615 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.45s
2026-01-05 00:51:10.274623 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.29s
2026-01-05 00:51:10.276862 | orchestrator | 2026-01-05 00:51:10 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED
2026-01-05 00:51:10.277413 | orchestrator | 2026-01-05 00:51:10 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:51:10.277663 | orchestrator | 2026-01-05 00:51:10 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:51:13.339137 | orchestrator | 2026-01-05 00:51:13 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED
2026-01-05 00:51:13.339583 | orchestrator | 2026-01-05 00:51:13 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED
2026-01-05 00:51:13.340821 | orchestrator | 2026-01-05 00:51:13 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:51:13.340869 | orchestrator | 2026-01-05 00:51:13 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:51:16.397248 | orchestrator | 2026-01-05 00:51:16 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED
2026-01-05 00:51:16.399478 | orchestrator | 2026-01-05 00:51:16 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED
2026-01-05 00:51:16.402521 | orchestrator | 2026-01-05 00:51:16 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:51:16.402677 | orchestrator | 2026-01-05 00:51:16 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:51:19.492603 | orchestrator | 2026-01-05 00:51:19 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED
2026-01-05 00:51:19.497257 | orchestrator | 2026-01-05 00:51:19 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED
2026-01-05 00:51:19.499679 | orchestrator | 2026-01-05 00:51:19 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:51:19.500930 | orchestrator | 2026-01-05 00:51:19 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:51:22.578614 | orchestrator | 2026-01-05 00:51:22 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED
2026-01-05 00:51:22.578683 | orchestrator | 2026-01-05 00:51:22 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED
2026-01-05 00:51:22.579415 | orchestrator | 2026-01-05 00:51:22 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:51:22.579447 | orchestrator | 2026-01-05 00:51:22 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:51:25.640104 | orchestrator | 2026-01-05 00:51:25 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED
2026-01-05 00:51:25.641316 | orchestrator | 2026-01-05 00:51:25 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED
2026-01-05 00:51:25.643670 | orchestrator | 2026-01-05 00:51:25 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:51:25.643707 | orchestrator | 2026-01-05 00:51:25 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:51:28.719373 | orchestrator | 2026-01-05 00:51:28 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED
2026-01-05 00:51:28.719531 | orchestrator | 2026-01-05 00:51:28 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED
2026-01-05 00:51:28.722646 | orchestrator | 2026-01-05 00:51:28 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:51:28.722706 | orchestrator | 2026-01-05 00:51:28 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:51:31.786039 | orchestrator | 2026-01-05 00:51:31 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED
2026-01-05 00:51:31.788671 | orchestrator | 2026-01-05 00:51:31 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED
2026-01-05 00:51:31.790510 | orchestrator | 2026-01-05 00:51:31 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:51:31.790576 | orchestrator | 2026-01-05 00:51:31 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:51:34.860776 | orchestrator | 2026-01-05 00:51:34 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED
2026-01-05 00:51:34.862299 | orchestrator | 2026-01-05 00:51:34 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED
2026-01-05 00:51:34.863821 | orchestrator | 2026-01-05 00:51:34 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:51:34.863877 | orchestrator | 2026-01-05 00:51:34 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:51:37.926939 | orchestrator | 2026-01-05 00:51:37 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED
2026-01-05 00:51:37.929076 | orchestrator | 2026-01-05 00:51:37 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED
2026-01-05 00:51:37.934365 | orchestrator | 2026-01-05 00:51:37 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:51:37.934424 | orchestrator | 2026-01-05 00:51:37 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:51:40.977961 | orchestrator | 2026-01-05 00:51:40 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED
2026-01-05 00:51:40.979446 | orchestrator | 2026-01-05 00:51:40 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED
2026-01-05 00:51:40.980876 | orchestrator | 2026-01-05 00:51:40 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:51:40.981022 | orchestrator | 2026-01-05 00:51:40 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:51:44.050366 | orchestrator | 2026-01-05 00:51:44 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED
2026-01-05 00:51:44.050421 | orchestrator | 2026-01-05 00:51:44 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED
2026-01-05 00:51:44.050429 | orchestrator | 2026-01-05 00:51:44 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:51:44.050436 | orchestrator | 2026-01-05 00:51:44 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:51:47.126181 | orchestrator | 2026-01-05 00:51:47 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED
2026-01-05 00:51:47.127451 | orchestrator | 2026-01-05 00:51:47 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED
2026-01-05 00:51:47.129210 | orchestrator | 2026-01-05 00:51:47 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:51:47.130115 | orchestrator | 2026-01-05 00:51:47 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:51:50.198526 | orchestrator | 2026-01-05 00:51:50 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED
2026-01-05 00:51:50.199817 | orchestrator | 2026-01-05 00:51:50 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED
2026-01-05 00:51:50.202827 | orchestrator | 2026-01-05 00:51:50 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:51:50.202901 | orchestrator | 2026-01-05 00:51:50 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:51:53.252609 | orchestrator | 2026-01-05 00:51:53 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED
2026-01-05 00:51:53.253669 | orchestrator | 2026-01-05 00:51:53 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED
2026-01-05 00:51:53.255399 | orchestrator | 2026-01-05 00:51:53 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:51:53.255464 | orchestrator | 2026-01-05 00:51:53 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:51:56.290402 | orchestrator | 2026-01-05 00:51:56 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED
2026-01-05 00:51:56.291445 | orchestrator | 2026-01-05 00:51:56 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state STARTED
2026-01-05 00:51:56.292416 | orchestrator | 2026-01-05 00:51:56 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:51:56.292448 | orchestrator | 2026-01-05 00:51:56 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:51:59.331240 | orchestrator | 2026-01-05 00:51:59 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED
2026-01-05 00:51:59.331379 | orchestrator |
2026-01-05 00:51:59 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED
2026-01-05 00:51:59.332767 | orchestrator | 2026-01-05 00:51:59 | INFO  | Task 805747d0-2436-4b3a-969b-c20ce72e185e is in state STARTED
2026-01-05 00:51:59.337140 | orchestrator | 2026-01-05 00:51:59 | INFO  | Task 4f27a129-13ee-4982-b236-f327c85cf95b is in state SUCCESS
2026-01-05 00:51:59.338844 | orchestrator |
2026-01-05 00:51:59.338912 | orchestrator |
2026-01-05 00:51:59.338983 | orchestrator | PLAY [Apply role common] *******************************************************
2026-01-05 00:51:59.338992 | orchestrator |
2026-01-05 00:51:59.338998 | orchestrator | TASK [common : include_tasks] **************************************************
2026-01-05 00:51:59.339005 | orchestrator | Monday 05 January 2026 00:49:23 +0000 (0:00:00.369) 0:00:00.369 ********
2026-01-05 00:51:59.339013 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:51:59.339020 | orchestrator |
2026-01-05 00:51:59.339027 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-01-05 00:51:59.339060 | orchestrator | Monday 05 January 2026 00:49:25 +0000 (0:00:01.532) 0:00:01.902 ********
2026-01-05 00:51:59.339068 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-05 00:51:59.339074 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-05 00:51:59.339080 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-05 00:51:59.339086 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-05 00:51:59.339093 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-05 00:51:59.339099 | orchestrator | changed:
[testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-05 00:51:59.339105 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-05 00:51:59.339126 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-05 00:51:59.339133 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-05 00:51:59.339139 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-05 00:51:59.339145 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-05 00:51:59.339151 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-05 00:51:59.339158 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-05 00:51:59.339164 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-05 00:51:59.339171 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-05 00:51:59.339177 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-05 00:51:59.339203 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-05 00:51:59.339211 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-05 00:51:59.339217 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-05 00:51:59.339223 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-05 00:51:59.339229 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-05 00:51:59.339236 | orchestrator
| 2026-01-05 00:51:59.339242 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-01-05 00:51:59.339248 | orchestrator | Monday 05 January 2026 00:49:29 +0000 (0:00:04.129) 0:00:06.032 ******** 2026-01-05 00:51:59.339261 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:51:59.339269 | orchestrator | 2026-01-05 00:51:59.339275 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-01-05 00:51:59.339324 | orchestrator | Monday 05 January 2026 00:49:31 +0000 (0:00:01.446) 0:00:07.478 ******** 2026-01-05 00:51:59.339343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:51:59.339374 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:51:59.339399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:51:59.339407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:51:59.339414 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:51:59.339422 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.339429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.339450 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:51:59.339458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-05 00:51:59.339470 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:51:59.339478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.339485 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.339491 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.339499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.339519 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.339526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.339533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.339544 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.339551 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.339558 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.339565 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.339572 | orchestrator | 2026-01-05 00:51:59.339578 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-01-05 00:51:59.339590 | orchestrator | Monday 05 January 2026 00:49:36 +0000 (0:00:05.862) 0:00:13.341 ******** 2026-01-05 00:51:59.339601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-05 00:51:59.339615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.339638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.339645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-05 00:51:59.339661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.339667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.339673 | orchestrator | skipping: 
[testbed-node-0] 2026-01-05 00:51:59.339680 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:51:59.339687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-05 00:51:59.339693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.339703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.339709 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:51:59.339719 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-05 00:51:59.339725 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.339732 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.339738 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:51:59.339748 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-05 00:51:59.339755 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.339761 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.339767 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:51:59.339774 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-05 00:51:59.339855 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.339863 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.339870 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:51:59.339877 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-05 00:51:59.339889 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.339896 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.339908 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:51:59.339915 | orchestrator | 2026-01-05 00:51:59.339922 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-01-05 00:51:59.339928 | orchestrator | Monday 05 January 2026 00:49:38 +0000 (0:00:01.738) 0:00:15.079 ******** 2026-01-05 00:51:59.339952 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-05 00:51:59.339964 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': 
True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.339971 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.339978 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:51:59.339987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-05 00:51:59.339993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.340000 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.340006 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:51:59.340021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-05 00:51:59.340047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.340061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.340068 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:51:59.340075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-05 00:51:59.340082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.340092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.340098 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:51:59.340104 
| orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-05 00:51:59.340582 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.340621 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.340636 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:51:59.340648 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-05 00:51:59.340675 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.340686 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.340698 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:51:59.340709 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-05 
00:51:59.340729 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.340741 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.340753 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:51:59.340765 | orchestrator | 2026-01-05 00:51:59.340776 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-01-05 00:51:59.340847 | orchestrator | Monday 05 January 2026 00:49:42 +0000 (0:00:03.729) 0:00:18.808 ******** 2026-01-05 00:51:59.340857 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:51:59.340863 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:51:59.340870 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:51:59.340876 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:51:59.340882 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:51:59.340896 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:51:59.340988 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:51:59.341001 | orchestrator | 
2026-01-05 00:51:59.341013 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-01-05 00:51:59.341037 | orchestrator | Monday 05 January 2026 00:49:43 +0000 (0:00:01.561) 0:00:20.370 ******** 2026-01-05 00:51:59.341048 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:51:59.341059 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:51:59.341070 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:51:59.341080 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:51:59.341091 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:51:59.341102 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:51:59.341113 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:51:59.341124 | orchestrator | 2026-01-05 00:51:59.341135 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-01-05 00:51:59.341146 | orchestrator | Monday 05 January 2026 00:49:46 +0000 (0:00:02.021) 0:00:22.391 ******** 2026-01-05 00:51:59.341160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:51:59.341174 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:51:59.341180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:51:59.341187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:51:59.341199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.341206 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:51:59.341231 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:51:59.341239 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.341246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.341260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.341272 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:51:59.341289 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.341300 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.341328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.341342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.341354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.341366 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.341379 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.341386 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.341396 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.341403 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.341417 | orchestrator | 2026-01-05 00:51:59.341423 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-01-05 00:51:59.341430 | orchestrator | Monday 05 January 2026 00:49:53 +0000 (0:00:07.165) 0:00:29.556 ******** 2026-01-05 00:51:59.341436 | orchestrator | [WARNING]: Skipped 2026-01-05 00:51:59.341443 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-01-05 00:51:59.341455 | orchestrator | to this access issue: 2026-01-05 00:51:59.341467 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-01-05 00:51:59.341473 | orchestrator | directory 2026-01-05 00:51:59.341480 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 00:51:59.341488 | orchestrator | 2026-01-05 00:51:59.341500 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-01-05 00:51:59.341510 | orchestrator | Monday 05 January 2026 00:49:54 +0000 (0:00:01.817) 0:00:31.373 ******** 2026-01-05 00:51:59.341520 | orchestrator | [WARNING]: Skipped 2026-01-05 00:51:59.341526 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-01-05 00:51:59.341538 | orchestrator | to this access issue: 2026-01-05 00:51:59.341545 | 
orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-01-05 00:51:59.341552 | orchestrator | directory 2026-01-05 00:51:59.341562 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 00:51:59.341573 | orchestrator | 2026-01-05 00:51:59.341585 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-01-05 00:51:59.341596 | orchestrator | Monday 05 January 2026 00:49:56 +0000 (0:00:01.501) 0:00:32.875 ******** 2026-01-05 00:51:59.341607 | orchestrator | [WARNING]: Skipped 2026-01-05 00:51:59.341618 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-01-05 00:51:59.341629 | orchestrator | to this access issue: 2026-01-05 00:51:59.341640 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-01-05 00:51:59.341651 | orchestrator | directory 2026-01-05 00:51:59.341662 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 00:51:59.341674 | orchestrator | 2026-01-05 00:51:59.341685 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-01-05 00:51:59.341697 | orchestrator | Monday 05 January 2026 00:49:57 +0000 (0:00:00.986) 0:00:33.862 ******** 2026-01-05 00:51:59.341707 | orchestrator | [WARNING]: Skipped 2026-01-05 00:51:59.341719 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-01-05 00:51:59.341730 | orchestrator | to this access issue: 2026-01-05 00:51:59.341741 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-01-05 00:51:59.341752 | orchestrator | directory 2026-01-05 00:51:59.341764 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 00:51:59.341775 | orchestrator | 2026-01-05 00:51:59.341786 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-01-05 
00:51:59.341797 | orchestrator | Monday 05 January 2026 00:49:58 +0000 (0:00:00.852) 0:00:34.714 ******** 2026-01-05 00:51:59.341809 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:51:59.341819 | orchestrator | changed: [testbed-manager] 2026-01-05 00:51:59.341825 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:51:59.341831 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:51:59.341838 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:51:59.341845 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:51:59.341852 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:51:59.341859 | orchestrator | 2026-01-05 00:51:59.341864 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-01-05 00:51:59.341871 | orchestrator | Monday 05 January 2026 00:50:04 +0000 (0:00:06.005) 0:00:40.720 ******** 2026-01-05 00:51:59.341880 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 00:51:59.341903 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 00:51:59.341915 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 00:51:59.341925 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 00:51:59.341968 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 00:51:59.341976 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 00:51:59.341982 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 00:51:59.341989 | orchestrator | 2026-01-05 00:51:59.341995 | orchestrator | TASK [common : Ensure RabbitMQ 
Erlang cookie exists] *************************** 2026-01-05 00:51:59.342001 | orchestrator | Monday 05 January 2026 00:50:11 +0000 (0:00:06.912) 0:00:47.633 ******** 2026-01-05 00:51:59.342083 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:51:59.342105 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:51:59.342116 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:51:59.342127 | orchestrator | changed: [testbed-manager] 2026-01-05 00:51:59.342146 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:51:59.342158 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:51:59.342169 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:51:59.342181 | orchestrator | 2026-01-05 00:51:59.342193 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-01-05 00:51:59.342204 | orchestrator | Monday 05 January 2026 00:50:15 +0000 (0:00:04.310) 0:00:51.944 ******** 2026-01-05 00:51:59.342218 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:51:59.342243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.342257 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:51:59.342271 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.342295 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.342309 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.342322 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:51:59.342335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.342345 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2026-01-05 00:51:59.342362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.342370 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.342377 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.342394 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:51:59.342404 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.342413 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:51:59.342423 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.342431 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.342446 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:51:59.342459 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:59.342475 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.342482 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.342494 | orchestrator | 2026-01-05 00:51:59.342507 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-01-05 00:51:59.342519 | orchestrator | Monday 05 January 2026 00:50:19 +0000 (0:00:04.136) 0:00:56.081 ******** 2026-01-05 00:51:59.342531 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-05 00:51:59.342543 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-05 00:51:59.342556 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-05 00:51:59.342567 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-05 00:51:59.342578 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-05 00:51:59.342590 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-05 00:51:59.342601 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-05 00:51:59.342612 | orchestrator | 2026-01-05 00:51:59.342624 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-01-05 00:51:59.342635 | orchestrator | Monday 05 January 2026 00:50:24 
+0000 (0:00:04.701) 0:01:00.782 ******** 2026-01-05 00:51:59.342646 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-05 00:51:59.342658 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-05 00:51:59.342677 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-05 00:51:59.342689 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-05 00:51:59.342701 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-05 00:51:59.342713 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-05 00:51:59.342724 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-05 00:51:59.342737 | orchestrator | 2026-01-05 00:51:59.342748 | orchestrator | TASK [common : Check common containers] **************************************** 2026-01-05 00:51:59.342760 | orchestrator | Monday 05 January 2026 00:50:26 +0000 (0:00:02.469) 0:01:03.252 ******** 2026-01-05 00:51:59.342773 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:51:59.342797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:51:59.342819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:51:59.342827 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:51:59.342834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 
00:51:59.342842 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.342852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.342860 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:51:59.342872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.342891 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:51:59.342904 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.342916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.342929 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.342986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.343006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.343019 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.343055 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.343067 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.343079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.343090 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.343102 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:51:59.343113 | orchestrator | 2026-01-05 00:51:59.343124 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-01-05 00:51:59.343136 | orchestrator | Monday 05 January 2026 00:50:30 +0000 (0:00:03.574) 0:01:06.827 ******** 2026-01-05 00:51:59.343147 | orchestrator | changed: [testbed-manager] 2026-01-05 00:51:59.343158 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:51:59.343168 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:51:59.343180 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:51:59.343191 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:51:59.343201 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:51:59.343212 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:51:59.343223 | orchestrator | 2026-01-05 00:51:59.343233 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-01-05 00:51:59.343244 | orchestrator | Monday 05 January 2026 00:50:32 +0000 (0:00:01.864) 0:01:08.692 ******** 2026-01-05 00:51:59.343254 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:51:59.343265 | orchestrator | changed: 
[testbed-manager] 2026-01-05 00:51:59.343275 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:51:59.343286 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:51:59.343296 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:51:59.343307 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:51:59.343325 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:51:59.343337 | orchestrator | 2026-01-05 00:51:59.343348 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-05 00:51:59.343366 | orchestrator | Monday 05 January 2026 00:50:33 +0000 (0:00:01.399) 0:01:10.092 ******** 2026-01-05 00:51:59.343378 | orchestrator | 2026-01-05 00:51:59.343388 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-05 00:51:59.343400 | orchestrator | Monday 05 January 2026 00:50:33 +0000 (0:00:00.065) 0:01:10.158 ******** 2026-01-05 00:51:59.343411 | orchestrator | 2026-01-05 00:51:59.343422 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-05 00:51:59.343433 | orchestrator | Monday 05 January 2026 00:50:33 +0000 (0:00:00.062) 0:01:10.220 ******** 2026-01-05 00:51:59.343444 | orchestrator | 2026-01-05 00:51:59.343456 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-05 00:51:59.343467 | orchestrator | Monday 05 January 2026 00:50:33 +0000 (0:00:00.073) 0:01:10.294 ******** 2026-01-05 00:51:59.343477 | orchestrator | 2026-01-05 00:51:59.343488 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-05 00:51:59.343498 | orchestrator | Monday 05 January 2026 00:50:34 +0000 (0:00:00.328) 0:01:10.622 ******** 2026-01-05 00:51:59.343509 | orchestrator | 2026-01-05 00:51:59.343519 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-05 00:51:59.343530 | 
orchestrator | Monday 05 January 2026 00:50:34 +0000 (0:00:00.079) 0:01:10.701 ********
2026-01-05 00:51:59.343542 | orchestrator |
2026-01-05 00:51:59.343553 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-05 00:51:59.343563 | orchestrator | Monday 05 January 2026 00:50:34 +0000 (0:00:00.069) 0:01:10.771 ********
2026-01-05 00:51:59.343573 | orchestrator |
2026-01-05 00:51:59.343585 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-01-05 00:51:59.343603 | orchestrator | Monday 05 January 2026 00:50:34 +0000 (0:00:00.093) 0:01:10.864 ********
2026-01-05 00:51:59.343614 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:51:59.343626 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:51:59.343638 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:51:59.343649 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:51:59.343659 | orchestrator | changed: [testbed-manager]
2026-01-05 00:51:59.343669 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:51:59.343681 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:51:59.343691 | orchestrator |
2026-01-05 00:51:59.343702 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-01-05 00:51:59.343713 | orchestrator | Monday 05 January 2026 00:51:10 +0000 (0:00:35.866) 0:01:46.731 ********
2026-01-05 00:51:59.343724 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:51:59.343734 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:51:59.343745 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:51:59.343756 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:51:59.343767 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:51:59.343779 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:51:59.343791 | orchestrator | changed: [testbed-manager]
2026-01-05 00:51:59.343801 | orchestrator |
2026-01-05 00:51:59.343812 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-01-05 00:51:59.343823 | orchestrator | Monday 05 January 2026 00:51:42 +0000 (0:00:32.565) 0:02:19.296 ********
2026-01-05 00:51:59.343835 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:51:59.343846 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:51:59.343852 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:51:59.343858 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:51:59.343864 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:51:59.343870 | orchestrator | ok: [testbed-manager]
2026-01-05 00:51:59.343876 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:51:59.343882 | orchestrator |
2026-01-05 00:51:59.343888 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-01-05 00:51:59.343895 | orchestrator | Monday 05 January 2026 00:51:45 +0000 (0:00:02.875) 0:02:22.172 ********
2026-01-05 00:51:59.343901 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:51:59.343921 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:51:59.343946 | orchestrator | changed: [testbed-manager]
2026-01-05 00:51:59.343953 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:51:59.343960 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:51:59.343968 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:51:59.343979 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:51:59.343990 | orchestrator |
2026-01-05 00:51:59.344000 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:51:59.344013 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-05 00:51:59.344026 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-05 00:51:59.344038 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-05 00:51:59.344050 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-05 00:51:59.344061 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-05 00:51:59.344069 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-05 00:51:59.344075 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-05 00:51:59.344081 | orchestrator |
2026-01-05 00:51:59.344088 | orchestrator |
2026-01-05 00:51:59.344099 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:51:59.344106 | orchestrator | Monday 05 January 2026 00:51:56 +0000 (0:00:11.123) 0:02:33.296 ********
2026-01-05 00:51:59.344112 | orchestrator | ===============================================================================
2026-01-05 00:51:59.344119 | orchestrator | common : Restart fluentd container ------------------------------------- 35.87s
2026-01-05 00:51:59.344125 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 32.57s
2026-01-05 00:51:59.344130 | orchestrator | common : Restart cron container ---------------------------------------- 11.12s
2026-01-05 00:51:59.344140 | orchestrator | common : Copying over config.json files for services -------------------- 7.16s
2026-01-05 00:51:59.344151 | orchestrator | common : Copying over cron logrotate config file ------------------------ 6.91s
2026-01-05 00:51:59.344162 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 6.01s
2026-01-05 00:51:59.344169 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.86s
2026-01-05 00:51:59.344175 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 4.70s
2026-01-05 00:51:59.344182 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 4.31s
2026-01-05 00:51:59.344193 | orchestrator | common : Ensuring config directories have correct owner and permission --- 4.14s
2026-01-05 00:51:59.344204 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.13s
2026-01-05 00:51:59.344215 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.73s
2026-01-05 00:51:59.344227 | orchestrator | common : Check common containers ---------------------------------------- 3.57s
2026-01-05 00:51:59.344239 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.88s
2026-01-05 00:51:59.344260 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.47s
2026-01-05 00:51:59.344271 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 2.02s
2026-01-05 00:51:59.344282 | orchestrator | common : Creating log volume -------------------------------------------- 1.86s
2026-01-05 00:51:59.344302 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.82s
2026-01-05 00:51:59.344314 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.74s
2026-01-05 00:51:59.344326 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.56s
2026-01-05 00:51:59.344337 | orchestrator | 2026-01-05 00:51:59 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:51:59.344350 | orchestrator | 2026-01-05 00:51:59 | INFO  | Task 145276d6-fd36-4f73-b4b0-daf3b9e36731 is in state STARTED
2026-01-05 00:51:59.344360 | orchestrator | 2026-01-05 00:51:59 | INFO  | Task 0576ebc6-e624-4a02-b57c-3821846b0041 is in state STARTED
2026-01-05 00:51:59.344372 | orchestrator | 2026-01-05 00:51:59 | INFO  | Wait 1 second(s)
until the next check
2026-01-05 00:52:02.383166 | orchestrator | 2026-01-05 00:52:02 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED
2026-01-05 00:52:02.386000 | orchestrator | 2026-01-05 00:52:02 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED
2026-01-05 00:52:02.386202 | orchestrator | 2026-01-05 00:52:02 | INFO  | Task 805747d0-2436-4b3a-969b-c20ce72e185e is in state STARTED
2026-01-05 00:52:02.387350 | orchestrator | 2026-01-05 00:52:02 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:52:02.388379 | orchestrator | 2026-01-05 00:52:02 | INFO  | Task 145276d6-fd36-4f73-b4b0-daf3b9e36731 is in state STARTED
2026-01-05 00:52:02.389305 | orchestrator | 2026-01-05 00:52:02 | INFO  | Task 0576ebc6-e624-4a02-b57c-3821846b0041 is in state STARTED
2026-01-05 00:52:02.389330 | orchestrator | 2026-01-05 00:52:02 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:52:05.445182 | orchestrator | 2026-01-05 00:52:05 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED
2026-01-05 00:52:05.448065 | orchestrator | 2026-01-05 00:52:05 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED
2026-01-05 00:52:05.449195 | orchestrator | 2026-01-05 00:52:05 | INFO  | Task 805747d0-2436-4b3a-969b-c20ce72e185e is in state STARTED
2026-01-05 00:52:05.450460 | orchestrator | 2026-01-05 00:52:05 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:52:05.451607 | orchestrator | 2026-01-05 00:52:05 | INFO  | Task 145276d6-fd36-4f73-b4b0-daf3b9e36731 is in state STARTED
2026-01-05 00:52:05.453278 | orchestrator | 2026-01-05 00:52:05 | INFO  | Task 0576ebc6-e624-4a02-b57c-3821846b0041 is in state STARTED
2026-01-05 00:52:05.453309 | orchestrator | 2026-01-05 00:52:05 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:52:08.534389 | orchestrator | 2026-01-05 00:52:08 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED
2026-01-05 00:52:08.534447 | orchestrator | 2026-01-05 00:52:08 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED
2026-01-05 00:52:08.534453 | orchestrator | 2026-01-05 00:52:08 | INFO  | Task 805747d0-2436-4b3a-969b-c20ce72e185e is in state STARTED
2026-01-05 00:52:08.534456 | orchestrator | 2026-01-05 00:52:08 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:52:08.534459 | orchestrator | 2026-01-05 00:52:08 | INFO  | Task 145276d6-fd36-4f73-b4b0-daf3b9e36731 is in state STARTED
2026-01-05 00:52:08.534463 | orchestrator | 2026-01-05 00:52:08 | INFO  | Task 0576ebc6-e624-4a02-b57c-3821846b0041 is in state STARTED
2026-01-05 00:52:08.534466 | orchestrator | 2026-01-05 00:52:08 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:52:11.551439 | orchestrator | 2026-01-05 00:52:11 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED
2026-01-05 00:52:11.552271 | orchestrator | 2026-01-05 00:52:11 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED
2026-01-05 00:52:11.553621 | orchestrator | 2026-01-05 00:52:11 | INFO  | Task 805747d0-2436-4b3a-969b-c20ce72e185e is in state STARTED
2026-01-05 00:52:11.554892 | orchestrator | 2026-01-05 00:52:11 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:52:11.555786 | orchestrator | 2026-01-05 00:52:11 | INFO  | Task 145276d6-fd36-4f73-b4b0-daf3b9e36731 is in state STARTED
2026-01-05 00:52:11.557080 | orchestrator | 2026-01-05 00:52:11 | INFO  | Task 0576ebc6-e624-4a02-b57c-3821846b0041 is in state STARTED
2026-01-05 00:52:11.557105 | orchestrator | 2026-01-05 00:52:11 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:52:14.600312 | orchestrator | 2026-01-05 00:52:14 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED
2026-01-05 00:52:14.601324 | orchestrator | 2026-01-05 00:52:14 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED
2026-01-05 00:52:14.602494 | orchestrator | 2026-01-05 00:52:14 | INFO  | Task bc460e67-278a-4750-b31b-0765110271aa is in state STARTED
2026-01-05 00:52:14.603648 | orchestrator | 2026-01-05 00:52:14 | INFO  | Task 805747d0-2436-4b3a-969b-c20ce72e185e is in state STARTED
2026-01-05 00:52:14.605051 | orchestrator | 2026-01-05 00:52:14 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:52:14.605829 | orchestrator | 2026-01-05 00:52:14 | INFO  | Task 145276d6-fd36-4f73-b4b0-daf3b9e36731 is in state SUCCESS
2026-01-05 00:52:14.606897 | orchestrator | 2026-01-05 00:52:14 | INFO  | Task 0576ebc6-e624-4a02-b57c-3821846b0041 is in state STARTED
2026-01-05 00:52:14.607029 | orchestrator | 2026-01-05 00:52:14 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:52:17.650553 | orchestrator | 2026-01-05 00:52:17 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED
2026-01-05 00:52:17.651524 | orchestrator | 2026-01-05 00:52:17 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED
2026-01-05 00:52:17.652507 | orchestrator | 2026-01-05 00:52:17 | INFO  | Task bc460e67-278a-4750-b31b-0765110271aa is in state STARTED
2026-01-05 00:52:17.653320 | orchestrator | 2026-01-05 00:52:17 | INFO  | Task 805747d0-2436-4b3a-969b-c20ce72e185e is in state STARTED
2026-01-05 00:52:17.656687 | orchestrator | 2026-01-05 00:52:17 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:52:17.658156 | orchestrator | 2026-01-05 00:52:17 | INFO  | Task 0576ebc6-e624-4a02-b57c-3821846b0041 is in state STARTED
2026-01-05 00:52:17.658206 | orchestrator | 2026-01-05 00:52:17 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:52:20.720457 | orchestrator | 2026-01-05 00:52:20 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED
2026-01-05 00:52:20.720643 | orchestrator | 2026-01-05 00:52:20 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED
2026-01-05 00:52:20.723613 | orchestrator | 2026-01-05 00:52:20 | INFO  | Task bc460e67-278a-4750-b31b-0765110271aa is in state STARTED
2026-01-05 00:52:20.723760 | orchestrator | 2026-01-05 00:52:20 | INFO  | Task 805747d0-2436-4b3a-969b-c20ce72e185e is in state STARTED
2026-01-05 00:52:20.725339 | orchestrator | 2026-01-05 00:52:20 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:52:20.726758 | orchestrator | 2026-01-05 00:52:20 | INFO  | Task 0576ebc6-e624-4a02-b57c-3821846b0041 is in state STARTED
2026-01-05 00:52:20.726784 | orchestrator | 2026-01-05 00:52:20 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:52:23.790192 | orchestrator | 2026-01-05 00:52:23 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED
2026-01-05 00:52:23.796011 | orchestrator | 2026-01-05 00:52:23 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED
2026-01-05 00:52:23.798118 | orchestrator | 2026-01-05 00:52:23 | INFO  | Task bc460e67-278a-4750-b31b-0765110271aa is in state STARTED
2026-01-05 00:52:23.799247 | orchestrator | 2026-01-05 00:52:23 | INFO  | Task 805747d0-2436-4b3a-969b-c20ce72e185e is in state STARTED
2026-01-05 00:52:23.800659 | orchestrator | 2026-01-05 00:52:23 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:52:23.801866 | orchestrator | 2026-01-05 00:52:23 | INFO  | Task 0576ebc6-e624-4a02-b57c-3821846b0041 is in state STARTED
2026-01-05 00:52:23.801946 | orchestrator | 2026-01-05 00:52:23 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:52:26.862515 | orchestrator | 2026-01-05 00:52:26 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED
2026-01-05 00:52:26.863066 | orchestrator | 2026-01-05 00:52:26 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED
2026-01-05 00:52:26.865211 | orchestrator | 2026-01-05 00:52:26 | INFO  | Task bc460e67-278a-4750-b31b-0765110271aa is in state STARTED
2026-01-05 00:52:26.867534 | orchestrator | 2026-01-05 00:52:26 | INFO  | Task 805747d0-2436-4b3a-969b-c20ce72e185e is in state STARTED
2026-01-05 00:52:26.870155 | orchestrator | 2026-01-05 00:52:26 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:52:26.871021 | orchestrator | 2026-01-05 00:52:26 | INFO  | Task 0576ebc6-e624-4a02-b57c-3821846b0041 is in state STARTED
2026-01-05 00:52:26.871119 | orchestrator | 2026-01-05 00:52:26 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:52:29.926472 | orchestrator | 2026-01-05 00:52:29 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED
2026-01-05 00:52:29.926578 | orchestrator | 2026-01-05 00:52:29 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED
2026-01-05 00:52:29.927101 | orchestrator | 2026-01-05 00:52:29 | INFO  | Task bc460e67-278a-4750-b31b-0765110271aa is in state STARTED
2026-01-05 00:52:29.927554 | orchestrator | 2026-01-05 00:52:29 | INFO  | Task 805747d0-2436-4b3a-969b-c20ce72e185e is in state SUCCESS
2026-01-05 00:52:29.930494 | orchestrator |
2026-01-05 00:52:29.930571 | orchestrator |
2026-01-05 00:52:29.930586 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 00:52:29.930599 | orchestrator |
2026-01-05 00:52:29.930611 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 00:52:29.930623 | orchestrator | Monday 05 January 2026 00:52:01 +0000 (0:00:00.275) 0:00:00.275 ********
2026-01-05 00:52:29.930634 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:52:29.930646 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:52:29.930657 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:52:29.930668 | orchestrator |
2026-01-05 00:52:29.930679 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 00:52:29.930690 | orchestrator | Monday 05 January 2026 00:52:02 +0000 (0:00:00.347) 0:00:00.623 ********
2026-01-05 00:52:29.930702 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-01-05 00:52:29.930713 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-01-05 00:52:29.930723 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-01-05 00:52:29.930734 | orchestrator |
2026-01-05 00:52:29.930748 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-01-05 00:52:29.930768 | orchestrator |
2026-01-05 00:52:29.930784 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-01-05 00:52:29.930800 | orchestrator | Monday 05 January 2026 00:52:02 +0000 (0:00:00.676) 0:00:01.299 ********
2026-01-05 00:52:29.930848 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:52:29.930915 | orchestrator |
2026-01-05 00:52:29.930934 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-01-05 00:52:29.930952 | orchestrator | Monday 05 January 2026 00:52:03 +0000 (0:00:00.636) 0:00:01.935 ********
2026-01-05 00:52:29.931163 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-01-05 00:52:29.931175 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-01-05 00:52:29.931186 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-01-05 00:52:29.931198 | orchestrator |
2026-01-05 00:52:29.931209 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-01-05 00:52:29.931219 | orchestrator | Monday 05 January 2026 00:52:04 +0000 (0:00:00.897) 0:00:02.833 ********
2026-01-05 00:52:29.931230 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-01-05 00:52:29.931241 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-01-05 00:52:29.931252 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-01-05 00:52:29.931263 | orchestrator |
2026-01-05 00:52:29.931273 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-01-05 00:52:29.931284 | orchestrator | Monday 05 January 2026 00:52:06 +0000 (0:00:02.401) 0:00:05.234 ********
2026-01-05 00:52:29.931295 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:52:29.931306 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:52:29.931316 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:52:29.931327 | orchestrator |
2026-01-05 00:52:29.931338 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-01-05 00:52:29.931349 | orchestrator | Monday 05 January 2026 00:52:09 +0000 (0:00:02.813) 0:00:08.047 ********
2026-01-05 00:52:29.931360 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:52:29.931370 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:52:29.931381 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:52:29.931392 | orchestrator |
2026-01-05 00:52:29.931403 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:52:29.931414 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:52:29.931427 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:52:29.931438 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:52:29.931449 | orchestrator |
2026-01-05 00:52:29.931460 | orchestrator |
2026-01-05 00:52:29.931470 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:52:29.931481 | orchestrator | Monday 05 January 2026 00:52:12 +0000 (0:00:03.166) 0:00:11.214 ********
2026-01-05 00:52:29.931492 | orchestrator | ===============================================================================
2026-01-05 00:52:29.931543 | orchestrator | memcached : Restart memcached container --------------------------------- 3.17s
2026-01-05 00:52:29.931555 | orchestrator | memcached : Check memcached container ----------------------------------- 2.81s
2026-01-05 00:52:29.931566 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.40s
2026-01-05 00:52:29.931578 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.90s
2026-01-05 00:52:29.931589 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.68s
2026-01-05 00:52:29.931599 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.64s
2026-01-05 00:52:29.931610 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s
2026-01-05 00:52:29.931621 | orchestrator |
2026-01-05 00:52:29.931632 | orchestrator |
2026-01-05 00:52:29.931642 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 00:52:29.931667 | orchestrator |
2026-01-05 00:52:29.931678 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 00:52:29.931689 | orchestrator | Monday 05 January 2026 00:52:02 +0000 (0:00:00.374) 0:00:00.374 ********
2026-01-05 00:52:29.931699 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:52:29.931710 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:52:29.931721 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:52:29.931732 | orchestrator |
2026-01-05 00:52:29.931743 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 00:52:29.931781 | orchestrator | Monday 05 January 2026 00:52:02 +0000 (0:00:00.420) 0:00:00.795 ********
2026-01-05 00:52:29.931800 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-01-05 00:52:29.931819 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-01-05 00:52:29.931838 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-01-05 00:52:29.931857 | orchestrator |
2026-01-05 00:52:29.931908 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-01-05 00:52:29.931925 | orchestrator |
2026-01-05 00:52:29.931942 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-01-05 00:52:29.931960 | orchestrator | Monday 05 January 2026 00:52:03 +0000 (0:00:00.572) 0:00:01.367 ********
2026-01-05 00:52:29.931979 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:52:29.931994 | orchestrator |
2026-01-05 00:52:29.932006 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-01-05 00:52:29.932022 | orchestrator | Monday 05 January 2026 00:52:03 +0000 (0:00:00.735) 0:00:02.103 ********
2026-01-05 00:52:29.932044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-05 00:52:29.932062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-05 00:52:29.932082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-05 00:52:29.932095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-05 00:52:29.932116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-05 00:52:29.932139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-05 00:52:29.932151 | orchestrator |
2026-01-05 00:52:29.932167 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-01-05 00:52:29.932184 | orchestrator | Monday 05 January 2026 00:52:05 +0000 (0:00:01.288) 0:00:03.391 ********
2026-01-05 00:52:29.932202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-05 00:52:29.932221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-05 00:52:29.932248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-05 00:52:29.932269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-05 00:52:29.932302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-05 00:52:29.932332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-05 00:52:29.932353 | orchestrator |
2026-01-05 00:52:29.932372 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-01-05 00:52:29.932389 | orchestrator | Monday 05 January 2026 00:52:08 +0000 (0:00:03.117) 0:00:06.509 ********
2026-01-05 00:52:29.932401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-05 00:52:29.932413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-05 00:52:29.932424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-05 00:52:29.932442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-05 00:52:29.932462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-05 00:52:29.932474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-05 00:52:29.932491 | orchestrator |
2026-01-05 00:52:29.932518 | orchestrator | TASK [redis : Check redis containers] ******************************************
2026-01-05 00:52:29.932537 | orchestrator | Monday 05 January 2026 00:52:11 +0000 (0:00:02.988) 0:00:09.498 ********
2026-01-05 00:52:29.932556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-05 00:52:29.932575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-05 00:52:29.932591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-05 00:52:29.932619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes':
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 00:52:29.932648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 00:52:29.932661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 00:52:29.932672 | orchestrator | 2026-01-05 00:52:29.932684 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-05 00:52:29.932694 | orchestrator 
| Monday 05 January 2026 00:52:13 +0000 (0:00:01.851) 0:00:11.349 ******** 2026-01-05 00:52:29.932706 | orchestrator | 2026-01-05 00:52:29.932717 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-05 00:52:29.932734 | orchestrator | Monday 05 January 2026 00:52:13 +0000 (0:00:00.175) 0:00:11.524 ******** 2026-01-05 00:52:29.932746 | orchestrator | 2026-01-05 00:52:29.932757 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-05 00:52:29.932767 | orchestrator | Monday 05 January 2026 00:52:13 +0000 (0:00:00.154) 0:00:11.678 ******** 2026-01-05 00:52:29.932778 | orchestrator | 2026-01-05 00:52:29.932788 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-01-05 00:52:29.932799 | orchestrator | Monday 05 January 2026 00:52:13 +0000 (0:00:00.166) 0:00:11.845 ******** 2026-01-05 00:52:29.932810 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:52:29.932821 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:52:29.932831 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:52:29.932842 | orchestrator | 2026-01-05 00:52:29.932853 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-01-05 00:52:29.932924 | orchestrator | Monday 05 January 2026 00:52:22 +0000 (0:00:08.715) 0:00:20.561 ******** 2026-01-05 00:52:29.932938 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:52:29.932951 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:52:29.932970 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:52:29.932985 | orchestrator | 2026-01-05 00:52:29.932996 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:52:29.933007 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:52:29.933019 | orchestrator | testbed-node-1 : 
ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:52:29.933030 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:52:29.933041 | orchestrator | 2026-01-05 00:52:29.933061 | orchestrator | 2026-01-05 00:52:29.933072 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:52:29.933083 | orchestrator | Monday 05 January 2026 00:52:29 +0000 (0:00:06.769) 0:00:27.330 ******** 2026-01-05 00:52:29.933093 | orchestrator | =============================================================================== 2026-01-05 00:52:29.933104 | orchestrator | redis : Restart redis container ----------------------------------------- 8.72s 2026-01-05 00:52:29.933115 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 6.77s 2026-01-05 00:52:29.933126 | orchestrator | redis : Copying over default config.json files -------------------------- 3.12s 2026-01-05 00:52:29.933137 | orchestrator | redis : Copying over redis config files --------------------------------- 2.99s 2026-01-05 00:52:29.933148 | orchestrator | redis : Check redis containers ------------------------------------------ 1.85s 2026-01-05 00:52:29.933158 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.29s 2026-01-05 00:52:29.933177 | orchestrator | redis : include_tasks --------------------------------------------------- 0.74s 2026-01-05 00:52:29.933188 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.57s 2026-01-05 00:52:29.933199 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.50s 2026-01-05 00:52:29.933209 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.42s 2026-01-05 00:52:29.933220 | orchestrator | 2026-01-05 00:52:29 | INFO  | Task 
41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:52:29.933232 | orchestrator | 2026-01-05 00:52:29 | INFO  | Task 0576ebc6-e624-4a02-b57c-3821846b0041 is in state STARTED 2026-01-05 00:52:29.933243 | orchestrator | 2026-01-05 00:52:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:53:22.399436 | orchestrator | 2026-01-05 00:53:22 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED 2026-01-05 00:53:22.403125 | orchestrator | 2026-01-05 00:53:22 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:53:22.404913 | orchestrator | 2026-01-05 00:53:22 | INFO  | Task bc460e67-278a-4750-b31b-0765110271aa is in state STARTED 2026-01-05 00:53:22.405180 | orchestrator | 2026-01-05 00:53:22 | INFO  | Task 81014e09-4fd4-420a-986c-c979db8fb294 is in state STARTED 2026-01-05 00:53:22.406188 | orchestrator | 2026-01-05 00:53:22 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:53:22.407493 | orchestrator | 2026-01-05 00:53:22 | INFO  | Task 0576ebc6-e624-4a02-b57c-3821846b0041 is in state SUCCESS 2026-01-05 00:53:22.408124 | orchestrator | 2026-01-05 00:53:22.410102 | orchestrator 
| 2026-01-05 00:53:22.410148 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 00:53:22.410157 | orchestrator | 2026-01-05 00:53:22.410163 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 00:53:22.410170 | orchestrator | Monday 05 January 2026 00:52:02 +0000 (0:00:00.443) 0:00:00.443 ******** 2026-01-05 00:53:22.410177 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:53:22.410184 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:53:22.410190 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:53:22.410197 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:53:22.410203 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:53:22.410209 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:53:22.410215 | orchestrator | 2026-01-05 00:53:22.410222 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 00:53:22.410229 | orchestrator | Monday 05 January 2026 00:52:04 +0000 (0:00:01.431) 0:00:01.875 ******** 2026-01-05 00:53:22.410235 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-05 00:53:22.410242 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-05 00:53:22.410248 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-05 00:53:22.410255 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-05 00:53:22.410261 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-05 00:53:22.410267 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-05 00:53:22.410274 | orchestrator | 2026-01-05 00:53:22.410280 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-01-05 
00:53:22.410286 | orchestrator | 2026-01-05 00:53:22.410292 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-01-05 00:53:22.410302 | orchestrator | Monday 05 January 2026 00:52:05 +0000 (0:00:00.791) 0:00:02.667 ******** 2026-01-05 00:53:22.410318 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:53:22.410334 | orchestrator | 2026-01-05 00:53:22.410347 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-05 00:53:22.410359 | orchestrator | Monday 05 January 2026 00:52:06 +0000 (0:00:01.694) 0:00:04.361 ******** 2026-01-05 00:53:22.410373 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-01-05 00:53:22.410386 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-01-05 00:53:22.410401 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-01-05 00:53:22.410415 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-01-05 00:53:22.410429 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-01-05 00:53:22.410443 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-01-05 00:53:22.410457 | orchestrator | 2026-01-05 00:53:22.410472 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-05 00:53:22.410486 | orchestrator | Monday 05 January 2026 00:52:08 +0000 (0:00:01.697) 0:00:06.059 ******** 2026-01-05 00:53:22.410499 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-01-05 00:53:22.410514 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-01-05 00:53:22.410520 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-01-05 00:53:22.410526 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-01-05 00:53:22.410532 | 
orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-01-05 00:53:22.410538 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-01-05 00:53:22.410565 | orchestrator | 2026-01-05 00:53:22.410572 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-05 00:53:22.410577 | orchestrator | Monday 05 January 2026 00:52:10 +0000 (0:00:02.077) 0:00:08.136 ******** 2026-01-05 00:53:22.410584 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-01-05 00:53:22.410590 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:53:22.410598 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-01-05 00:53:22.410603 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:53:22.410609 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-01-05 00:53:22.410615 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:53:22.410621 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-01-05 00:53:22.410627 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:53:22.410632 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-01-05 00:53:22.410638 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:53:22.410644 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-01-05 00:53:22.410649 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:53:22.410656 | orchestrator | 2026-01-05 00:53:22.410673 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-01-05 00:53:22.410690 | orchestrator | Monday 05 January 2026 00:52:12 +0000 (0:00:01.636) 0:00:09.773 ******** 2026-01-05 00:53:22.410711 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:53:22.410719 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:53:22.410727 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:53:22.410737 | orchestrator | skipping: 
[testbed-node-3] 2026-01-05 00:53:22.410781 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:53:22.410791 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:53:22.410799 | orchestrator | 2026-01-05 00:53:22.410808 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-01-05 00:53:22.410817 | orchestrator | Monday 05 January 2026 00:52:13 +0000 (0:00:01.167) 0:00:10.940 ******** 2026-01-05 00:53:22.410858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:53:22.410870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:53:22.410877 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:53:22.410892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:53:22.410898 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:53:22.410906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:53:22.410917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:53:22.410932 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:53:22.410941 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:53:22.410956 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:53:22.410964 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:53:22.410981 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:53:22.410990 | orchestrator | 2026-01-05 00:53:22.410999 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-01-05 00:53:22.411008 | orchestrator | Monday 05 January 2026 00:52:16 +0000 (0:00:02.982) 0:00:13.923 ******** 2026-01-05 00:53:22.411017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:53:22.411026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:53:22.411041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:53:22.411048 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:53:22.411054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:53:22.411070 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:53:22.411077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:53:22.411084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:53:22.411097 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:53:22.411104 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:53:22.411110 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:53:22.411126 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:53:22.411132 | orchestrator | 2026-01-05 00:53:22.411138 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-01-05 00:53:22.411145 | orchestrator | Monday 05 January 2026 00:52:22 +0000 (0:00:05.565) 0:00:19.489 ******** 2026-01-05 00:53:22.411151 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:53:22.411158 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:53:22.411164 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:53:22.411170 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:53:22.411181 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:53:22.411187 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:53:22.411194 | orchestrator | 2026-01-05 00:53:22.411200 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-01-05 00:53:22.411206 | orchestrator | Monday 05 January 2026 00:52:24 +0000 (0:00:02.523) 0:00:22.012 ******** 2026-01-05 00:53:22.411212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:53:22.411218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:53:22.411224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:53:22.411230 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:53:22.411243 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:53:22.411250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:53:22.411260 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:53:22.411267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:53:22.411273 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 00:53:22.411279 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:53:22.411293 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:53:22.411309 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 00:53:22.411315 | orchestrator | 2026-01-05 00:53:22.411327 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-05 00:53:22.411336 | orchestrator | Monday 05 January 2026 00:52:28 +0000 (0:00:03.647) 0:00:25.660 ******** 2026-01-05 00:53:22.411342 | orchestrator | 2026-01-05 00:53:22.411347 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-05 00:53:22.411353 | orchestrator | Monday 05 January 2026 00:52:28 +0000 (0:00:00.534) 0:00:26.195 ******** 2026-01-05 00:53:22.411358 | orchestrator | 2026-01-05 00:53:22.411364 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-05 00:53:22.411370 | orchestrator | Monday 05 January 2026 00:52:29 +0000 (0:00:00.522) 0:00:26.717 ******** 2026-01-05 00:53:22.411375 | orchestrator | 2026-01-05 00:53:22.411381 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-05 00:53:22.411386 | orchestrator | Monday 05 January 2026 00:52:29 +0000 (0:00:00.414) 0:00:27.132 ******** 2026-01-05 00:53:22.411392 | orchestrator | 2026-01-05 00:53:22.411397 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-05 00:53:22.411403 | orchestrator | Monday 05 January 2026 00:52:29 +0000 (0:00:00.309) 0:00:27.442 ******** 2026-01-05 
00:53:22.411408 | orchestrator | 2026-01-05 00:53:22.411416 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-05 00:53:22.411428 | orchestrator | Monday 05 January 2026 00:52:30 +0000 (0:00:00.213) 0:00:27.656 ******** 2026-01-05 00:53:22.411438 | orchestrator | 2026-01-05 00:53:22.411443 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-01-05 00:53:22.411449 | orchestrator | Monday 05 January 2026 00:52:30 +0000 (0:00:00.258) 0:00:27.914 ******** 2026-01-05 00:53:22.411455 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:53:22.411465 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:53:22.411472 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:53:22.411478 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:53:22.411484 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:53:22.411491 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:53:22.411499 | orchestrator | 2026-01-05 00:53:22.411508 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-01-05 00:53:22.411516 | orchestrator | Monday 05 January 2026 00:52:40 +0000 (0:00:10.152) 0:00:38.067 ******** 2026-01-05 00:53:22.411522 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:53:22.411531 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:53:22.411539 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:53:22.411545 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:53:22.411551 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:53:22.411557 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:53:22.411564 | orchestrator | 2026-01-05 00:53:22.411570 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-01-05 00:53:22.411577 | orchestrator | Monday 05 January 2026 00:52:42 +0000 (0:00:01.869) 0:00:39.937 ******** 2026-01-05 00:53:22.411583 | orchestrator | changed: 
[testbed-node-2] 2026-01-05 00:53:22.411588 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:53:22.411602 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:53:22.411607 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:53:22.411614 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:53:22.411620 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:53:22.411625 | orchestrator | 2026-01-05 00:53:22.411631 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-01-05 00:53:22.411637 | orchestrator | Monday 05 January 2026 00:52:52 +0000 (0:00:10.480) 0:00:50.417 ******** 2026-01-05 00:53:22.411642 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-01-05 00:53:22.411650 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-01-05 00:53:22.411656 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-01-05 00:53:22.411662 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-01-05 00:53:22.411675 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-01-05 00:53:22.411689 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-01-05 00:53:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:53:22.411697 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-01-05 00:53:22.411701 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-01-05
00:53:22.411705 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-01-05 00:53:22.411709 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-01-05 00:53:22.411712 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-01-05 00:53:22.411716 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-01-05 00:53:22.411720 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-05 00:53:22.411724 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-05 00:53:22.411728 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-05 00:53:22.411731 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-05 00:53:22.411735 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-05 00:53:22.411741 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-05 00:53:22.411747 | orchestrator | 2026-01-05 00:53:22.411806 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-01-05 00:53:22.411813 | orchestrator | Monday 05 January 2026 00:53:01 +0000 (0:00:08.474) 0:00:58.892 ******** 2026-01-05 00:53:22.411821 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-01-05 00:53:22.411828 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:53:22.411835 | 
orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-01-05 00:53:22.411841 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:53:22.411848 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-01-05 00:53:22.411854 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:53:22.411861 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-01-05 00:53:22.411871 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-01-05 00:53:22.411875 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-01-05 00:53:22.411879 | orchestrator | 2026-01-05 00:53:22.411883 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-01-05 00:53:22.411886 | orchestrator | Monday 05 January 2026 00:53:04 +0000 (0:00:03.402) 0:01:02.295 ******** 2026-01-05 00:53:22.411890 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-01-05 00:53:22.411894 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:53:22.411898 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-01-05 00:53:22.411902 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:53:22.411905 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-01-05 00:53:22.411909 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:53:22.411913 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-01-05 00:53:22.411917 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-01-05 00:53:22.411921 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-01-05 00:53:22.411925 | orchestrator | 2026-01-05 00:53:22.411928 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-01-05 00:53:22.411932 | orchestrator | Monday 05 January 2026 00:53:08 +0000 (0:00:03.861) 0:01:06.156 ******** 2026-01-05 00:53:22.411936 | orchestrator | changed: 
[testbed-node-0] 2026-01-05 00:53:22.411940 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:53:22.411943 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:53:22.411947 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:53:22.411951 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:53:22.411954 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:53:22.411958 | orchestrator | 2026-01-05 00:53:22.411962 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:53:22.411967 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-05 00:53:22.411973 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-05 00:53:22.411976 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-05 00:53:22.411984 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-05 00:53:22.411993 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-05 00:53:22.411996 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-05 00:53:22.412000 | orchestrator | 2026-01-05 00:53:22.412004 | orchestrator | 2026-01-05 00:53:22.412008 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:53:22.412012 | orchestrator | Monday 05 January 2026 00:53:18 +0000 (0:00:10.170) 0:01:16.327 ******** 2026-01-05 00:53:22.412016 | orchestrator | =============================================================================== 2026-01-05 00:53:22.412019 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 20.65s 2026-01-05 00:53:22.412023 | orchestrator | openvswitch : Restart 
openvswitch-db-server container ------------------ 10.15s 2026-01-05 00:53:22.412027 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.47s 2026-01-05 00:53:22.412031 | orchestrator | openvswitch : Copying over config.json files for services --------------- 5.57s 2026-01-05 00:53:22.412035 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.86s 2026-01-05 00:53:22.412038 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.65s 2026-01-05 00:53:22.412051 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.40s 2026-01-05 00:53:22.412055 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.98s 2026-01-05 00:53:22.412059 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.52s 2026-01-05 00:53:22.412063 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.25s 2026-01-05 00:53:22.412067 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.08s 2026-01-05 00:53:22.412071 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.87s 2026-01-05 00:53:22.412074 | orchestrator | module-load : Load modules ---------------------------------------------- 1.70s 2026-01-05 00:53:22.412078 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.69s 2026-01-05 00:53:22.412082 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.64s 2026-01-05 00:53:22.412086 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.43s 2026-01-05 00:53:22.412089 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.17s 2026-01-05 00:53:22.412093 | orchestrator | Group hosts based on enabled services 
----------------------------------- 0.79s 2026-01-05 00:53:25.446534 | orchestrator | 2026-01-05 00:53:25 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state STARTED 2026-01-05 00:53:25.447992 | orchestrator | 2026-01-05 00:53:25 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:53:25.448913 | orchestrator | 2026-01-05 00:53:25 | INFO  | Task bc460e67-278a-4750-b31b-0765110271aa is in state STARTED 2026-01-05 00:53:25.449581 | orchestrator | 2026-01-05 00:53:25 | INFO  | Task 81014e09-4fd4-420a-986c-c979db8fb294 is in state STARTED 2026-01-05 00:53:25.450970 | orchestrator | 2026-01-05 00:53:25 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:53:25.451012 | orchestrator | 2026-01-05 00:53:25 | INFO  | Wait 1 second(s) until the next check [identical polling output for the same five tasks repeated every ~3 s from 00:53:28 through 00:54:14; all five tasks remained in state STARTED] 2026-01-05 00:54:17.872461 | orchestrator | 2026-01-05 00:54:17 | INFO  | Task dbbc60a4-fa24-4f89-b875-867d7634f6b7 is in state SUCCESS 2026-01-05 00:54:17.874213 | orchestrator | 2026-01-05 00:54:17.874271 | orchestrator | 2026-01-05 00:54:17.874295 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-01-05 00:54:17.874304 | 
orchestrator | 2026-01-05 00:54:17.874311 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-01-05 00:54:17.874318 | orchestrator | Monday 05 January 2026 00:49:23 +0000 (0:00:00.163) 0:00:00.163 ******** 2026-01-05 00:54:17.874324 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:54:17.874332 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:54:17.874338 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:54:17.874344 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:17.874350 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:17.874357 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:17.874361 | orchestrator | 2026-01-05 00:54:17.874365 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-01-05 00:54:17.874369 | orchestrator | Monday 05 January 2026 00:49:23 +0000 (0:00:00.552) 0:00:00.716 ******** 2026-01-05 00:54:17.874373 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:54:17.874378 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:54:17.874382 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:54:17.874386 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:17.874389 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:17.874393 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:17.874397 | orchestrator | 2026-01-05 00:54:17.874401 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-01-05 00:54:17.874405 | orchestrator | Monday 05 January 2026 00:49:24 +0000 (0:00:00.634) 0:00:01.351 ******** 2026-01-05 00:54:17.874409 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:54:17.874413 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:54:17.874417 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:54:17.874420 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:17.874424 | orchestrator | skipping: 
[testbed-node-1] 2026-01-05 00:54:17.874428 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:17.874432 | orchestrator | 2026-01-05 00:54:17.874435 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-01-05 00:54:17.874439 | orchestrator | Monday 05 January 2026 00:49:25 +0000 (0:00:00.718) 0:00:02.070 ******** 2026-01-05 00:54:17.874443 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:17.874447 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:54:17.874450 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:17.874454 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:54:17.874458 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:17.874487 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:54:17.874500 | orchestrator | 2026-01-05 00:54:17.874504 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-01-05 00:54:17.874508 | orchestrator | Monday 05 January 2026 00:49:28 +0000 (0:00:02.844) 0:00:04.914 ******** 2026-01-05 00:54:17.874512 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:54:17.874516 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:54:17.874520 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:54:17.874523 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:17.874527 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:17.874531 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:17.874535 | orchestrator | 2026-01-05 00:54:17.874694 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-01-05 00:54:17.874698 | orchestrator | Monday 05 January 2026 00:49:29 +0000 (0:00:01.691) 0:00:06.606 ******** 2026-01-05 00:54:17.874702 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:54:17.874706 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:54:17.874710 | orchestrator | changed: [testbed-node-0] 
2026-01-05 00:54:17.874713 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:17.874717 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:17.874721 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:54:17.874725 | orchestrator | 2026-01-05 00:54:17.874729 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-01-05 00:54:17.874738 | orchestrator | Monday 05 January 2026 00:49:31 +0000 (0:00:01.633) 0:00:08.240 ******** 2026-01-05 00:54:17.874742 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:54:17.874746 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:54:17.874750 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:54:17.874754 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:17.874757 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:17.874761 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:17.874765 | orchestrator | 2026-01-05 00:54:17.874769 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-01-05 00:54:17.874772 | orchestrator | Monday 05 January 2026 00:49:32 +0000 (0:00:00.606) 0:00:08.846 ******** 2026-01-05 00:54:17.874776 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:54:17.874780 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:54:17.874783 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:54:17.874787 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:17.874791 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:17.874806 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:17.874810 | orchestrator | 2026-01-05 00:54:17.874814 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-01-05 00:54:17.874818 | orchestrator | Monday 05 January 2026 00:49:32 +0000 (0:00:00.806) 0:00:09.653 ******** 2026-01-05 00:54:17.874822 | orchestrator | skipping: [testbed-node-3] => 
(item=net.bridge.bridge-nf-call-iptables)  2026-01-05 00:54:17.874826 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-05 00:54:17.874830 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:54:17.874833 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-05 00:54:17.874837 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-05 00:54:17.874841 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:54:17.874844 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-05 00:54:17.874848 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-05 00:54:17.874852 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:54:17.874856 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-05 00:54:17.874869 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-05 00:54:17.874873 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:17.874877 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-05 00:54:17.874880 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-05 00:54:17.874884 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:17.874888 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-05 00:54:17.874892 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-05 00:54:17.874896 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:17.874899 | orchestrator | 2026-01-05 00:54:17.874903 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-01-05 00:54:17.874907 | orchestrator | Monday 05 January 2026 
00:49:33 +0000 (0:00:00.896) 0:00:10.549 ******** 2026-01-05 00:54:17.874910 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:54:17.874914 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:54:17.875290 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:54:17.875306 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:17.875313 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:17.875319 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:17.875325 | orchestrator | 2026-01-05 00:54:17.875332 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-01-05 00:54:17.875340 | orchestrator | Monday 05 January 2026 00:49:35 +0000 (0:00:01.839) 0:00:12.389 ******** 2026-01-05 00:54:17.875358 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:54:17.875367 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:54:17.875374 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:54:17.875380 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:17.875386 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:17.875392 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:17.875399 | orchestrator | 2026-01-05 00:54:17.875405 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-01-05 00:54:17.875412 | orchestrator | Monday 05 January 2026 00:49:36 +0000 (0:00:01.218) 0:00:13.608 ******** 2026-01-05 00:54:17.875418 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:54:17.875425 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:54:17.875431 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:54:17.875438 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:17.875444 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:17.875450 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:17.875456 | orchestrator | 2026-01-05 00:54:17.875463 | orchestrator | TASK [k3s_download : 
Download k3s binary arm64] ******************************** 2026-01-05 00:54:17.875468 | orchestrator | Monday 05 January 2026 00:49:42 +0000 (0:00:05.904) 0:00:19.512 ******** 2026-01-05 00:54:17.875474 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:54:17.875480 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:54:17.875486 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:54:17.875493 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:17.875499 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:17.875505 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:17.875512 | orchestrator | 2026-01-05 00:54:17.875519 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-01-05 00:54:17.875524 | orchestrator | Monday 05 January 2026 00:49:45 +0000 (0:00:02.258) 0:00:21.771 ******** 2026-01-05 00:54:17.875528 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:54:17.875532 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:54:17.875536 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:54:17.875540 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:17.875543 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:17.875549 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:17.875555 | orchestrator | 2026-01-05 00:54:17.875562 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-01-05 00:54:17.875569 | orchestrator | Monday 05 January 2026 00:49:47 +0000 (0:00:02.750) 0:00:24.521 ******** 2026-01-05 00:54:17.875576 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:54:17.875582 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:54:17.875588 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:54:17.875595 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:17.875601 | orchestrator | skipping: 
[testbed-node-1] 2026-01-05 00:54:17.875607 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:17.875672 | orchestrator | 2026-01-05 00:54:17.875680 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-01-05 00:54:17.875687 | orchestrator | Monday 05 January 2026 00:49:49 +0000 (0:00:01.243) 0:00:25.765 ******** 2026-01-05 00:54:17.875693 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-01-05 00:54:17.875700 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-01-05 00:54:17.875707 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-01-05 00:54:17.875714 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-01-05 00:54:17.875720 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:54:17.875727 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-01-05 00:54:17.875743 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-01-05 00:54:17.875759 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:54:17.875765 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-01-05 00:54:17.875772 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-01-05 00:54:17.875799 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:54:17.875806 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-01-05 00:54:17.875813 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-01-05 00:54:17.875819 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:17.875825 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:17.875832 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-01-05 00:54:17.875838 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-01-05 00:54:17.875844 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:17.875850 | orchestrator | 2026-01-05 00:54:17.875857 | orchestrator | TASK 
[k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-01-05 00:54:17.875879 | orchestrator | Monday 05 January 2026 00:49:50 +0000 (0:00:01.610) 0:00:27.375 ******** 2026-01-05 00:54:17.875886 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:54:17.875892 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:54:17.875898 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:17.875904 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:54:17.875911 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:17.875915 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:17.875920 | orchestrator | 2026-01-05 00:54:17.875924 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-01-05 00:54:17.875934 | orchestrator | Monday 05 January 2026 00:49:51 +0000 (0:00:00.803) 0:00:28.179 ******** 2026-01-05 00:54:17.875939 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:54:17.875944 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:54:17.875948 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:54:17.875952 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:17.875956 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:17.875960 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:17.875965 | orchestrator | 2026-01-05 00:54:17.875969 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-01-05 00:54:17.875974 | orchestrator | 2026-01-05 00:54:17.875978 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-01-05 00:54:17.875982 | orchestrator | Monday 05 January 2026 00:49:52 +0000 (0:00:01.493) 0:00:29.673 ******** 2026-01-05 00:54:17.875987 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:17.875991 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:17.875995 | orchestrator 
| ok: [testbed-node-2] 2026-01-05 00:54:17.876000 | orchestrator | 2026-01-05 00:54:17.876004 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-01-05 00:54:17.876008 | orchestrator | Monday 05 January 2026 00:49:55 +0000 (0:00:02.399) 0:00:32.073 ******** 2026-01-05 00:54:17.876013 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:17.876017 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:17.876021 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:17.876026 | orchestrator | 2026-01-05 00:54:17.876037 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-01-05 00:54:17.876042 | orchestrator | Monday 05 January 2026 00:49:56 +0000 (0:00:01.446) 0:00:33.520 ******** 2026-01-05 00:54:17.876046 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:17.876049 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:17.876053 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:17.876057 | orchestrator | 2026-01-05 00:54:17.876061 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-01-05 00:54:17.876064 | orchestrator | Monday 05 January 2026 00:49:57 +0000 (0:00:01.199) 0:00:34.720 ******** 2026-01-05 00:54:17.876068 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:17.876072 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:17.876076 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:17.876079 | orchestrator | 2026-01-05 00:54:17.876083 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-01-05 00:54:17.876087 | orchestrator | Monday 05 January 2026 00:49:58 +0000 (0:00:00.967) 0:00:35.687 ******** 2026-01-05 00:54:17.876095 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:17.876099 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:17.876103 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:17.876107 | 
orchestrator | 2026-01-05 00:54:17.876111 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-01-05 00:54:17.876114 | orchestrator | Monday 05 January 2026 00:49:59 +0000 (0:00:00.473) 0:00:36.161 ******** 2026-01-05 00:54:17.876118 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:17.876122 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:17.876126 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:17.876129 | orchestrator | 2026-01-05 00:54:17.876133 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-01-05 00:54:17.876137 | orchestrator | Monday 05 January 2026 00:50:01 +0000 (0:00:01.926) 0:00:38.087 ******** 2026-01-05 00:54:17.876141 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:17.876145 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:17.876148 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:17.876152 | orchestrator | 2026-01-05 00:54:17.876156 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-01-05 00:54:17.876160 | orchestrator | Monday 05 January 2026 00:50:03 +0000 (0:00:01.706) 0:00:39.794 ******** 2026-01-05 00:54:17.876163 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:54:17.876167 | orchestrator | 2026-01-05 00:54:17.876171 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-01-05 00:54:17.876175 | orchestrator | Monday 05 January 2026 00:50:03 +0000 (0:00:00.779) 0:00:40.574 ******** 2026-01-05 00:54:17.876179 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:17.876202 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:17.876206 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:17.876210 | orchestrator | 2026-01-05 00:54:17.876213 | orchestrator | TASK [k3s_server : Create manifests 
directory on first master] ***************** 2026-01-05 00:54:17.876217 | orchestrator | Monday 05 January 2026 00:50:09 +0000 (0:00:05.529) 0:00:46.103 ******** 2026-01-05 00:54:17.876221 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:17.876225 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:17.876228 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:17.876232 | orchestrator | 2026-01-05 00:54:17.876236 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-01-05 00:54:17.876240 | orchestrator | Monday 05 January 2026 00:50:10 +0000 (0:00:00.905) 0:00:47.009 ******** 2026-01-05 00:54:17.876243 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:17.876247 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:17.876251 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:17.876255 | orchestrator | 2026-01-05 00:54:17.876258 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-01-05 00:54:17.876262 | orchestrator | Monday 05 January 2026 00:50:11 +0000 (0:00:01.174) 0:00:48.183 ******** 2026-01-05 00:54:17.876266 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:17.876270 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:17.876273 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:17.876277 | orchestrator | 2026-01-05 00:54:17.876284 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-01-05 00:54:17.876291 | orchestrator | Monday 05 January 2026 00:50:13 +0000 (0:00:02.050) 0:00:50.233 ******** 2026-01-05 00:54:17.876295 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:17.876299 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:17.876303 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:17.876307 | orchestrator | 2026-01-05 00:54:17.876310 | orchestrator | TASK [k3s_server : Deploy kube-vip 
manifest] *********************************** 2026-01-05 00:54:17.876314 | orchestrator | Monday 05 January 2026 00:50:14 +0000 (0:00:01.113) 0:00:51.347 ******** 2026-01-05 00:54:17.876318 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:17.876326 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:17.876329 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:17.876333 | orchestrator | 2026-01-05 00:54:17.876343 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-01-05 00:54:17.876347 | orchestrator | Monday 05 January 2026 00:50:15 +0000 (0:00:00.858) 0:00:52.206 ******** 2026-01-05 00:54:17.876351 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:17.876355 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:17.876359 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:17.876362 | orchestrator | 2026-01-05 00:54:17.876366 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-01-05 00:54:17.876370 | orchestrator | Monday 05 January 2026 00:50:17 +0000 (0:00:02.469) 0:00:54.676 ******** 2026-01-05 00:54:17.876374 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:17.876378 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:17.876381 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:17.876385 | orchestrator | 2026-01-05 00:54:17.876389 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-01-05 00:54:17.876393 | orchestrator | Monday 05 January 2026 00:50:21 +0000 (0:00:03.164) 0:00:57.841 ******** 2026-01-05 00:54:17.876397 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:17.876400 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:17.876404 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:17.876408 | orchestrator | 2026-01-05 00:54:17.876412 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check 
k3s-init.service if this fails)] *** 2026-01-05 00:54:17.876416 | orchestrator | Monday 05 January 2026 00:50:22 +0000 (0:00:01.024) 0:00:58.866 ******** 2026-01-05 00:54:17.876419 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-05 00:54:17.876424 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-05 00:54:17.876428 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-05 00:54:17.876432 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-01-05 00:54:17.876436 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-01-05 00:54:17.876439 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-01-05 00:54:17.876443 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-01-05 00:54:17.876447 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-01-05 00:54:17.876451 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-01-05 00:54:17.876454 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
2026-01-05 00:54:17.876458 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-01-05 00:54:17.876462 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-01-05 00:54:17.876466 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:17.876470 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:17.876473 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:17.876477 | orchestrator | 2026-01-05 00:54:17.876481 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-01-05 00:54:17.876488 | orchestrator | Monday 05 January 2026 00:51:05 +0000 (0:00:43.796) 0:01:42.663 ******** 2026-01-05 00:54:17.876492 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:17.876495 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:17.876499 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:17.876503 | orchestrator | 2026-01-05 00:54:17.876507 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-01-05 00:54:17.876510 | orchestrator | Monday 05 January 2026 00:51:06 +0000 (0:00:00.397) 0:01:43.060 ******** 2026-01-05 00:54:17.876514 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:17.876518 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:17.876522 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:17.876525 | orchestrator | 2026-01-05 00:54:17.876529 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-01-05 00:54:17.876533 | orchestrator | Monday 05 January 2026 00:51:07 +0000 (0:00:01.116) 0:01:44.176 ******** 2026-01-05 00:54:17.876537 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:17.876541 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:17.876547 | 
orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:17.876551 | orchestrator | 2026-01-05 00:54:17.876557 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-01-05 00:54:17.876561 | orchestrator | Monday 05 January 2026 00:51:09 +0000 (0:00:01.717) 0:01:45.894 ******** 2026-01-05 00:54:17.876572 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:17.876576 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:17.876580 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:17.876584 | orchestrator | 2026-01-05 00:54:17.876588 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-01-05 00:54:17.876592 | orchestrator | Monday 05 January 2026 00:51:34 +0000 (0:00:25.728) 0:02:11.622 ******** 2026-01-05 00:54:17.876595 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:17.876599 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:17.876603 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:17.876607 | orchestrator | 2026-01-05 00:54:17.876610 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-01-05 00:54:17.876614 | orchestrator | Monday 05 January 2026 00:51:35 +0000 (0:00:01.015) 0:02:12.638 ******** 2026-01-05 00:54:17.876618 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:17.876622 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:17.876626 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:17.876639 | orchestrator | 2026-01-05 00:54:17.876644 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-01-05 00:54:17.876648 | orchestrator | Monday 05 January 2026 00:51:36 +0000 (0:00:00.633) 0:02:13.271 ******** 2026-01-05 00:54:17.876651 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:17.876655 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:17.876659 | orchestrator | changed: [testbed-node-2] 
2026-01-05 00:54:17.876663 | orchestrator | 2026-01-05 00:54:17.876666 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-01-05 00:54:17.876670 | orchestrator | Monday 05 January 2026 00:51:37 +0000 (0:00:00.850) 0:02:14.122 ******** 2026-01-05 00:54:17.876674 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:17.876680 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:17.876687 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:17.876691 | orchestrator | 2026-01-05 00:54:17.876695 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-01-05 00:54:17.876699 | orchestrator | Monday 05 January 2026 00:51:38 +0000 (0:00:01.381) 0:02:15.504 ******** 2026-01-05 00:54:17.876703 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:17.876706 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:17.876719 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:17.876725 | orchestrator | 2026-01-05 00:54:17.876736 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-01-05 00:54:17.876740 | orchestrator | Monday 05 January 2026 00:51:39 +0000 (0:00:00.445) 0:02:15.949 ******** 2026-01-05 00:54:17.876748 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:17.876755 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:17.876760 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:17.876764 | orchestrator | 2026-01-05 00:54:17.876768 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-01-05 00:54:17.876771 | orchestrator | Monday 05 January 2026 00:51:39 +0000 (0:00:00.662) 0:02:16.612 ******** 2026-01-05 00:54:17.876775 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:17.876779 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:17.876783 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:17.876786 | orchestrator | 
2026-01-05 00:54:17.876790 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-01-05 00:54:17.876794 | orchestrator | Monday 05 January 2026 00:51:40 +0000 (0:00:00.801) 0:02:17.413 ******** 2026-01-05 00:54:17.876800 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:17.876806 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:17.876810 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:17.876814 | orchestrator | 2026-01-05 00:54:17.876817 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-01-05 00:54:17.876821 | orchestrator | Monday 05 January 2026 00:51:41 +0000 (0:00:01.129) 0:02:18.543 ******** 2026-01-05 00:54:17.876825 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:17.876831 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:17.876837 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:17.876841 | orchestrator | 2026-01-05 00:54:17.876847 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-01-05 00:54:17.876853 | orchestrator | Monday 05 January 2026 00:51:42 +0000 (0:00:00.773) 0:02:19.316 ******** 2026-01-05 00:54:17.876859 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:17.876865 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:17.876871 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:17.876876 | orchestrator | 2026-01-05 00:54:17.876882 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-01-05 00:54:17.876887 | orchestrator | Monday 05 January 2026 00:51:42 +0000 (0:00:00.344) 0:02:19.661 ******** 2026-01-05 00:54:17.876893 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:17.876899 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:17.876905 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:17.876911 | orchestrator | 
2026-01-05 00:54:17.876917 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-01-05 00:54:17.876923 | orchestrator | Monday 05 January 2026 00:51:43 +0000 (0:00:00.617) 0:02:20.278 ******** 2026-01-05 00:54:17.876929 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:17.876935 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:17.876941 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:17.876948 | orchestrator | 2026-01-05 00:54:17.876954 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-01-05 00:54:17.876960 | orchestrator | Monday 05 January 2026 00:51:45 +0000 (0:00:01.577) 0:02:21.855 ******** 2026-01-05 00:54:17.876966 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:17.876972 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:17.876978 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:17.876984 | orchestrator | 2026-01-05 00:54:17.876991 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-01-05 00:54:17.876997 | orchestrator | Monday 05 January 2026 00:51:45 +0000 (0:00:00.752) 0:02:22.608 ******** 2026-01-05 00:54:17.877003 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-05 00:54:17.877018 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-05 00:54:17.877026 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-05 00:54:17.877032 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-05 00:54:17.877044 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-05 00:54:17.877050 | orchestrator | changed: 
[testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-05 00:54:17.877056 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-05 00:54:17.877063 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-05 00:54:17.877067 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-05 00:54:17.877071 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-01-05 00:54:17.877076 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-05 00:54:17.877082 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-05 00:54:17.877088 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-01-05 00:54:17.877094 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-05 00:54:17.877100 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-05 00:54:17.877106 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-05 00:54:17.877113 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-05 00:54:17.877119 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-05 00:54:17.877126 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-05 00:54:17.877132 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-05 00:54:17.877138 | orchestrator | 2026-01-05 00:54:17.877145 | orchestrator | 
PLAY [Deploy k3s worker nodes] ************************************************* 2026-01-05 00:54:17.877152 | orchestrator | 2026-01-05 00:54:17.877158 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-01-05 00:54:17.877164 | orchestrator | Monday 05 January 2026 00:51:49 +0000 (0:00:03.310) 0:02:25.919 ******** 2026-01-05 00:54:17.877169 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:54:17.877173 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:54:17.877177 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:54:17.877181 | orchestrator | 2026-01-05 00:54:17.877184 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-01-05 00:54:17.877188 | orchestrator | Monday 05 January 2026 00:51:49 +0000 (0:00:00.807) 0:02:26.726 ******** 2026-01-05 00:54:17.877192 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:54:17.877196 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:54:17.877202 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:54:17.877207 | orchestrator | 2026-01-05 00:54:17.877211 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-01-05 00:54:17.877215 | orchestrator | Monday 05 January 2026 00:51:51 +0000 (0:00:01.442) 0:02:28.169 ******** 2026-01-05 00:54:17.877220 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:54:17.877227 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:54:17.877231 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:54:17.877235 | orchestrator | 2026-01-05 00:54:17.877240 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-01-05 00:54:17.877247 | orchestrator | Monday 05 January 2026 00:51:51 +0000 (0:00:00.349) 0:02:28.519 ******** 2026-01-05 00:54:17.877251 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:54:17.877255 | 
orchestrator | 2026-01-05 00:54:17.877259 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-01-05 00:54:17.877447 | orchestrator | Monday 05 January 2026 00:51:52 +0000 (0:00:00.804) 0:02:29.324 ******** 2026-01-05 00:54:17.877460 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:54:17.877465 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:54:17.877469 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:54:17.877473 | orchestrator | 2026-01-05 00:54:17.877477 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-01-05 00:54:17.877481 | orchestrator | Monday 05 January 2026 00:51:52 +0000 (0:00:00.356) 0:02:29.680 ******** 2026-01-05 00:54:17.877485 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:54:17.877489 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:54:17.877493 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:54:17.877496 | orchestrator | 2026-01-05 00:54:17.877500 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-01-05 00:54:17.877504 | orchestrator | Monday 05 January 2026 00:51:53 +0000 (0:00:00.343) 0:02:30.023 ******** 2026-01-05 00:54:17.877509 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:54:17.877513 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:54:17.877517 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:54:17.877521 | orchestrator | 2026-01-05 00:54:17.877525 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-01-05 00:54:17.877529 | orchestrator | Monday 05 January 2026 00:51:53 +0000 (0:00:00.332) 0:02:30.355 ******** 2026-01-05 00:54:17.877533 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:54:17.877537 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:54:17.877544 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:54:17.877548 | 
orchestrator | 2026-01-05 00:54:17.877557 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-01-05 00:54:17.877562 | orchestrator | Monday 05 January 2026 00:51:54 +0000 (0:00:01.033) 0:02:31.389 ******** 2026-01-05 00:54:17.877566 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:54:17.877570 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:54:17.877574 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:54:17.877578 | orchestrator | 2026-01-05 00:54:17.877626 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-01-05 00:54:17.877660 | orchestrator | Monday 05 January 2026 00:51:55 +0000 (0:00:01.200) 0:02:32.590 ******** 2026-01-05 00:54:17.877665 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:54:17.877669 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:54:17.877672 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:54:17.877676 | orchestrator | 2026-01-05 00:54:17.877680 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-01-05 00:54:17.877684 | orchestrator | Monday 05 January 2026 00:51:57 +0000 (0:00:01.307) 0:02:33.897 ******** 2026-01-05 00:54:17.877688 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:54:17.877692 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:54:17.877695 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:54:17.877699 | orchestrator | 2026-01-05 00:54:17.877703 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-01-05 00:54:17.877707 | orchestrator | 2026-01-05 00:54:17.877711 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-01-05 00:54:17.877716 | orchestrator | Monday 05 January 2026 00:52:07 +0000 (0:00:10.819) 0:02:44.717 ******** 2026-01-05 00:54:17.877723 | orchestrator | ok: [testbed-manager] 2026-01-05 
00:54:17.877731 | orchestrator | 2026-01-05 00:54:17.877735 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-01-05 00:54:17.877738 | orchestrator | Monday 05 January 2026 00:52:08 +0000 (0:00:00.944) 0:02:45.661 ******** 2026-01-05 00:54:17.877742 | orchestrator | changed: [testbed-manager] 2026-01-05 00:54:17.877748 | orchestrator | 2026-01-05 00:54:17.877771 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-05 00:54:17.877776 | orchestrator | Monday 05 January 2026 00:52:09 +0000 (0:00:00.618) 0:02:46.279 ******** 2026-01-05 00:54:17.877780 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-05 00:54:17.877788 | orchestrator | 2026-01-05 00:54:17.877792 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-05 00:54:17.877796 | orchestrator | Monday 05 January 2026 00:52:10 +0000 (0:00:00.526) 0:02:46.806 ******** 2026-01-05 00:54:17.877800 | orchestrator | changed: [testbed-manager] 2026-01-05 00:54:17.877804 | orchestrator | 2026-01-05 00:54:17.877811 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-01-05 00:54:17.877818 | orchestrator | Monday 05 January 2026 00:52:11 +0000 (0:00:01.045) 0:02:47.851 ******** 2026-01-05 00:54:17.877825 | orchestrator | changed: [testbed-manager] 2026-01-05 00:54:17.877831 | orchestrator | 2026-01-05 00:54:17.877837 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-01-05 00:54:17.877843 | orchestrator | Monday 05 January 2026 00:52:11 +0000 (0:00:00.808) 0:02:48.659 ******** 2026-01-05 00:54:17.877849 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-05 00:54:17.877855 | orchestrator | 2026-01-05 00:54:17.877861 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-01-05 
00:54:17.877867 | orchestrator | Monday 05 January 2026 00:52:13 +0000 (0:00:01.819) 0:02:50.479 ******** 2026-01-05 00:54:17.877873 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-05 00:54:17.877879 | orchestrator | 2026-01-05 00:54:17.877885 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-01-05 00:54:17.877891 | orchestrator | Monday 05 January 2026 00:52:14 +0000 (0:00:01.041) 0:02:51.520 ******** 2026-01-05 00:54:17.877897 | orchestrator | changed: [testbed-manager] 2026-01-05 00:54:17.877903 | orchestrator | 2026-01-05 00:54:17.877909 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-01-05 00:54:17.877916 | orchestrator | Monday 05 January 2026 00:52:15 +0000 (0:00:00.921) 0:02:52.442 ******** 2026-01-05 00:54:17.877922 | orchestrator | changed: [testbed-manager] 2026-01-05 00:54:17.877928 | orchestrator | 2026-01-05 00:54:17.877934 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-01-05 00:54:17.877941 | orchestrator | 2026-01-05 00:54:17.877947 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-01-05 00:54:17.877954 | orchestrator | Monday 05 January 2026 00:52:16 +0000 (0:00:00.628) 0:02:53.070 ******** 2026-01-05 00:54:17.877960 | orchestrator | ok: [testbed-manager] 2026-01-05 00:54:17.877967 | orchestrator | 2026-01-05 00:54:17.877971 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-01-05 00:54:17.877975 | orchestrator | Monday 05 January 2026 00:52:16 +0000 (0:00:00.196) 0:02:53.267 ******** 2026-01-05 00:54:17.877979 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-01-05 00:54:17.877983 | orchestrator | 2026-01-05 00:54:17.877987 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] 
****************** 2026-01-05 00:54:17.877990 | orchestrator | Monday 05 January 2026 00:52:16 +0000 (0:00:00.288) 0:02:53.555 ******** 2026-01-05 00:54:17.877994 | orchestrator | ok: [testbed-manager] 2026-01-05 00:54:17.877998 | orchestrator | 2026-01-05 00:54:17.878003 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2026-01-05 00:54:17.878009 | orchestrator | Monday 05 January 2026 00:52:17 +0000 (0:00:01.110) 0:02:54.666 ******** 2026-01-05 00:54:17.878051 | orchestrator | ok: [testbed-manager] 2026-01-05 00:54:17.878057 | orchestrator | 2026-01-05 00:54:17.878062 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-01-05 00:54:17.878066 | orchestrator | Monday 05 January 2026 00:52:20 +0000 (0:00:02.527) 0:02:57.193 ******** 2026-01-05 00:54:17.878071 | orchestrator | changed: [testbed-manager] 2026-01-05 00:54:17.878075 | orchestrator | 2026-01-05 00:54:17.878079 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-01-05 00:54:17.878084 | orchestrator | Monday 05 January 2026 00:52:21 +0000 (0:00:00.971) 0:02:58.165 ******** 2026-01-05 00:54:17.878095 | orchestrator | ok: [testbed-manager] 2026-01-05 00:54:17.878100 | orchestrator | 2026-01-05 00:54:17.878109 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-01-05 00:54:17.878120 | orchestrator | Monday 05 January 2026 00:52:22 +0000 (0:00:00.800) 0:02:58.965 ******** 2026-01-05 00:54:17.878128 | orchestrator | changed: [testbed-manager] 2026-01-05 00:54:17.878133 | orchestrator | 2026-01-05 00:54:17.878137 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-01-05 00:54:17.878141 | orchestrator | Monday 05 January 2026 00:52:31 +0000 (0:00:09.262) 0:03:08.227 ******** 2026-01-05 00:54:17.878146 | orchestrator | changed: [testbed-manager] 2026-01-05 
00:54:17.878150 | orchestrator | 2026-01-05 00:54:17.878157 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-01-05 00:54:17.878164 | orchestrator | Monday 05 January 2026 00:52:47 +0000 (0:00:16.463) 0:03:24.690 ******** 2026-01-05 00:54:17.878169 | orchestrator | ok: [testbed-manager] 2026-01-05 00:54:17.878173 | orchestrator | 2026-01-05 00:54:17.878178 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-01-05 00:54:17.878182 | orchestrator | 2026-01-05 00:54:17.878187 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-01-05 00:54:17.878191 | orchestrator | Monday 05 January 2026 00:52:48 +0000 (0:00:00.617) 0:03:25.308 ******** 2026-01-05 00:54:17.878196 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:17.878200 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:17.878203 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:17.878207 | orchestrator | 2026-01-05 00:54:17.878211 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-01-05 00:54:17.878215 | orchestrator | Monday 05 January 2026 00:52:49 +0000 (0:00:00.452) 0:03:25.761 ******** 2026-01-05 00:54:17.878218 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:17.878222 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:17.878226 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:17.878232 | orchestrator | 2026-01-05 00:54:17.878238 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-01-05 00:54:17.878245 | orchestrator | Monday 05 January 2026 00:52:49 +0000 (0:00:00.447) 0:03:26.208 ******** 2026-01-05 00:54:17.878249 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:54:17.878253 | orchestrator | 
2026-01-05 00:54:17.878257 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-01-05 00:54:17.878261 | orchestrator | Monday 05 January 2026 00:52:50 +0000 (0:00:00.796) 0:03:27.005 ******** 2026-01-05 00:54:17.878264 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-05 00:54:17.878268 | orchestrator | 2026-01-05 00:54:17.878272 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-01-05 00:54:17.878276 | orchestrator | Monday 05 January 2026 00:52:51 +0000 (0:00:00.890) 0:03:27.895 ******** 2026-01-05 00:54:17.878280 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 00:54:17.878284 | orchestrator | 2026-01-05 00:54:17.878291 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-01-05 00:54:17.878297 | orchestrator | Monday 05 January 2026 00:52:51 +0000 (0:00:00.831) 0:03:28.727 ******** 2026-01-05 00:54:17.878302 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:17.878307 | orchestrator | 2026-01-05 00:54:17.878458 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-01-05 00:54:17.878472 | orchestrator | Monday 05 January 2026 00:52:52 +0000 (0:00:00.154) 0:03:28.881 ******** 2026-01-05 00:54:17.878478 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 00:54:17.878483 | orchestrator | 2026-01-05 00:54:17.878490 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-01-05 00:54:17.878496 | orchestrator | Monday 05 January 2026 00:52:53 +0000 (0:00:01.016) 0:03:29.898 ******** 2026-01-05 00:54:17.878502 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:17.878508 | orchestrator | 2026-01-05 00:54:17.878514 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-01-05 00:54:17.878521 | orchestrator | Monday 05 January 
2026 00:52:53 +0000 (0:00:00.144) 0:03:30.043 ******** 2026-01-05 00:54:17.878533 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:17.878539 | orchestrator | 2026-01-05 00:54:17.878546 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-01-05 00:54:17.878552 | orchestrator | Monday 05 January 2026 00:52:53 +0000 (0:00:00.114) 0:03:30.158 ******** 2026-01-05 00:54:17.878557 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:17.878561 | orchestrator | 2026-01-05 00:54:17.878565 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-01-05 00:54:17.878569 | orchestrator | Monday 05 January 2026 00:52:53 +0000 (0:00:00.119) 0:03:30.278 ******** 2026-01-05 00:54:17.878572 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:17.878576 | orchestrator | 2026-01-05 00:54:17.878580 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-01-05 00:54:17.878584 | orchestrator | Monday 05 January 2026 00:52:53 +0000 (0:00:00.111) 0:03:30.389 ******** 2026-01-05 00:54:17.878589 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-05 00:54:17.878595 | orchestrator | 2026-01-05 00:54:17.878600 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-01-05 00:54:17.878610 | orchestrator | Monday 05 January 2026 00:52:59 +0000 (0:00:05.497) 0:03:35.887 ******** 2026-01-05 00:54:17.878618 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-01-05 00:54:17.878624 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
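The "Wait for Cilium resources" task above polls each workload (cilium-operator, the cilium daemonset, hubble-relay, hubble-ui) and retries until it reports ready, with 30 retries configured in this run. A minimal Python sketch of that retry pattern, under the assumption that `probe` stands in for a readiness check such as `kubectl rollout status <resource>` (hypothetical stand-in, not the actual role implementation):

```python
import time


def wait_for_resources(resources, probe, retries=30, delay=10):
    """Poll each resource until probe(resource) reports ready, or give up.

    probe: callable returning True once the resource is rolled out
    (a hypothetical stand-in for `kubectl rollout status`).
    retries/delay mirror Ansible's `retries:` and `delay:` keywords.
    """
    for resource in resources:
        for _attempt in range(retries):
            if probe(resource):
                break  # resource is ready, move to the next one
            time.sleep(delay)  # wait before the next retry
        else:
            # all retries exhausted without success
            raise TimeoutError(f"{resource} not ready after {retries} attempts")
```

In the log this is why a single `FAILED - RETRYING` line can appear and the task still ends `ok`: one failed probe simply consumes a retry.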
2026-01-05 00:54:17.878644 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-01-05 00:54:17.878651 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-01-05 00:54:17.878704 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-01-05 00:54:17.878711 | orchestrator | 2026-01-05 00:54:17.878717 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-01-05 00:54:17.878728 | orchestrator | Monday 05 January 2026 00:53:42 +0000 (0:00:43.079) 0:04:18.967 ******** 2026-01-05 00:54:17.878740 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 00:54:17.878746 | orchestrator | 2026-01-05 00:54:17.878753 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-01-05 00:54:17.878759 | orchestrator | Monday 05 January 2026 00:53:43 +0000 (0:00:01.601) 0:04:20.568 ******** 2026-01-05 00:54:17.878763 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-05 00:54:17.878767 | orchestrator | 2026-01-05 00:54:17.878770 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-01-05 00:54:17.878774 | orchestrator | Monday 05 January 2026 00:53:45 +0000 (0:00:01.445) 0:04:22.014 ******** 2026-01-05 00:54:17.878778 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-05 00:54:17.878798 | orchestrator | 2026-01-05 00:54:17.878803 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-01-05 00:54:17.878807 | orchestrator | Monday 05 January 2026 00:53:46 +0000 (0:00:01.116) 0:04:23.130 ******** 2026-01-05 00:54:17.878810 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:17.878814 | orchestrator | 2026-01-05 00:54:17.878818 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-01-05 00:54:17.878821 | orchestrator 
| Monday 05 January 2026 00:53:46 +0000 (0:00:00.161) 0:04:23.292 ******** 2026-01-05 00:54:17.878825 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-01-05 00:54:17.878829 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-01-05 00:54:17.878833 | orchestrator | 2026-01-05 00:54:17.878837 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-01-05 00:54:17.878841 | orchestrator | Monday 05 January 2026 00:53:48 +0000 (0:00:02.222) 0:04:25.515 ******** 2026-01-05 00:54:17.878845 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:17.878852 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:17.878872 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:17.878886 | orchestrator | 2026-01-05 00:54:17.878895 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-01-05 00:54:17.878901 | orchestrator | Monday 05 January 2026 00:53:49 +0000 (0:00:00.447) 0:04:25.962 ******** 2026-01-05 00:54:17.878907 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:17.878912 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:17.878918 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:17.878942 | orchestrator | 2026-01-05 00:54:17.878948 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-01-05 00:54:17.878954 | orchestrator | 2026-01-05 00:54:17.878961 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-01-05 00:54:17.878967 | orchestrator | Monday 05 January 2026 00:53:50 +0000 (0:00:01.218) 0:04:27.181 ******** 2026-01-05 00:54:17.878973 | orchestrator | ok: [testbed-manager] 2026-01-05 00:54:17.878979 | orchestrator | 2026-01-05 00:54:17.878985 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-01-05 00:54:17.878991 | orchestrator | Monday 05 January 2026 00:53:50 +0000 (0:00:00.175) 0:04:27.356 ******** 2026-01-05 00:54:17.878998 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-01-05 00:54:17.879005 | orchestrator | 2026-01-05 00:54:17.879011 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-01-05 00:54:17.879018 | orchestrator | Monday 05 January 2026 00:53:50 +0000 (0:00:00.248) 0:04:27.605 ******** 2026-01-05 00:54:17.879041 | orchestrator | changed: [testbed-manager] 2026-01-05 00:54:17.879047 | orchestrator | 2026-01-05 00:54:17.879051 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-01-05 00:54:17.879054 | orchestrator | 2026-01-05 00:54:17.879059 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-01-05 00:54:17.879066 | orchestrator | Monday 05 January 2026 00:53:58 +0000 (0:00:07.333) 0:04:34.939 ******** 2026-01-05 00:54:17.879073 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:54:17.879077 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:54:17.879081 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:54:17.879084 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:17.879088 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:17.879092 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:17.879096 | orchestrator | 2026-01-05 00:54:17.879101 | orchestrator | TASK [Manage labels] *********************************************************** 2026-01-05 00:54:17.879106 | orchestrator | Monday 05 January 2026 00:53:59 +0000 (0:00:01.373) 0:04:36.312 ******** 2026-01-05 00:54:17.879110 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-01-05 00:54:17.879115 | orchestrator | ok: [testbed-node-5 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2026-01-05 00:54:17.879119 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-01-05 00:54:17.879123 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-05 00:54:17.879128 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-05 00:54:17.879132 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-05 00:54:17.879138 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-05 00:54:17.879145 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-05 00:54:17.879151 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-05 00:54:17.879155 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-05 00:54:17.879160 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-05 00:54:17.879164 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-05 00:54:17.879182 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-05 00:54:17.879190 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-05 00:54:17.879194 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-05 00:54:17.879198 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-05 00:54:17.879201 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-05 00:54:17.879205 | orchestrator | ok: [testbed-node-1 -> localhost] 
=> (item=node-role.osism.tech/network-plane=true) 2026-01-05 00:54:17.879209 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-05 00:54:17.879212 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-05 00:54:17.879217 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-05 00:54:17.879224 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-05 00:54:17.879231 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-05 00:54:17.879235 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-05 00:54:17.879239 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-05 00:54:17.879242 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-05 00:54:17.879246 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-05 00:54:17.879250 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-05 00:54:17.879254 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-05 00:54:17.879257 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-05 00:54:17.879261 | orchestrator | 2026-01-05 00:54:17.879265 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-01-05 00:54:17.879268 | orchestrator | Monday 05 January 2026 00:54:14 +0000 (0:00:15.356) 0:04:51.669 ******** 2026-01-05 00:54:17.879272 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:54:17.879276 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:54:17.879280 | orchestrator | 
skipping: [testbed-node-5]
2026-01-05 00:54:17.879283 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:54:17.879287 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:54:17.879291 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:54:17.879295 | orchestrator |
2026-01-05 00:54:17.879298 | orchestrator | TASK [Manage taints] ***********************************************************
2026-01-05 00:54:17.879302 | orchestrator | Monday 05 January 2026 00:54:16 +0000 (0:00:01.154) 0:04:52.824 ********
2026-01-05 00:54:17.879306 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:54:17.879309 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:54:17.879313 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:54:17.879317 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:54:17.879321 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:54:17.879325 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:54:17.879332 | orchestrator |
2026-01-05 00:54:17.879339 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:54:17.879343 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:54:17.879349 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-01-05 00:54:17.879354 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-01-05 00:54:17.879360 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-01-05 00:54:17.879364 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-05 00:54:17.879371 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-05 00:54:17.879377 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-05 00:54:17.879382 | orchestrator |
2026-01-05 00:54:17.879385 | orchestrator |
2026-01-05 00:54:17.879389 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:54:17.879393 | orchestrator | Monday 05 January 2026 00:54:16 +0000 (0:00:00.705) 0:04:53.529 ********
2026-01-05 00:54:17.879397 | orchestrator | ===============================================================================
2026-01-05 00:54:17.879451 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.80s
2026-01-05 00:54:17.879456 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 43.08s
2026-01-05 00:54:17.879460 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.73s
2026-01-05 00:54:17.879470 | orchestrator | kubectl : Install required packages ------------------------------------ 16.46s
2026-01-05 00:54:17.879477 | orchestrator | Manage labels ---------------------------------------------------------- 15.36s
2026-01-05 00:54:17.879483 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.82s
2026-01-05 00:54:17.879488 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 9.26s
2026-01-05 00:54:17.879492 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 7.33s
2026-01-05 00:54:17.879496 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.90s
2026-01-05 00:54:17.879499 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 5.53s
2026-01-05 00:54:17.879503 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.50s
2026-01-05 00:54:17.879507 | orchestrator | k3s_server : Remove manifests and folders that are
only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.31s 2026-01-05 00:54:17.879511 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 3.17s 2026-01-05 00:54:17.879514 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.84s 2026-01-05 00:54:17.879518 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.75s 2026-01-05 00:54:17.879522 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.53s 2026-01-05 00:54:17.879525 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.47s 2026-01-05 00:54:17.879529 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 2.40s 2026-01-05 00:54:17.879533 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 2.26s 2026-01-05 00:54:17.879537 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.22s 2026-01-05 00:54:17.879540 | orchestrator | 2026-01-05 00:54:17 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:54:17.879544 | orchestrator | 2026-01-05 00:54:17 | INFO  | Task bc460e67-278a-4750-b31b-0765110271aa is in state STARTED 2026-01-05 00:54:17.879550 | orchestrator | 2026-01-05 00:54:17 | INFO  | Task 81014e09-4fd4-420a-986c-c979db8fb294 is in state STARTED 2026-01-05 00:54:17.881882 | orchestrator | 2026-01-05 00:54:17 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:54:17.881988 | orchestrator | 2026-01-05 00:54:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:54:20.920298 | orchestrator | 2026-01-05 00:54:20 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:54:20.920606 | orchestrator | 2026-01-05 00:54:20 | INFO  | Task bc460e67-278a-4750-b31b-0765110271aa is in 
state STARTED 2026-01-05 00:54:20.923762 | orchestrator | 2026-01-05 00:54:20 | INFO  | Task 81014e09-4fd4-420a-986c-c979db8fb294 is in state STARTED 2026-01-05 00:54:20.924363 | orchestrator | 2026-01-05 00:54:20 | INFO  | Task 5a8dd03c-5e56-400c-a7be-27c48ccbd6ae is in state STARTED 2026-01-05 00:54:20.925309 | orchestrator | 2026-01-05 00:54:20 | INFO  | Task 4575e532-ddbb-45c8-ba87-2ded58f28298 is in state STARTED 2026-01-05 00:54:20.926131 | orchestrator | 2026-01-05 00:54:20 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:54:20.926374 | orchestrator | 2026-01-05 00:54:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:54:24.047717 | orchestrator | 2026-01-05 00:54:24 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:54:24.049826 | orchestrator | 2026-01-05 00:54:24 | INFO  | Task bc460e67-278a-4750-b31b-0765110271aa is in state STARTED 2026-01-05 00:54:24.052777 | orchestrator | 2026-01-05 00:54:24 | INFO  | Task 81014e09-4fd4-420a-986c-c979db8fb294 is in state STARTED 2026-01-05 00:54:24.055564 | orchestrator | 2026-01-05 00:54:24 | INFO  | Task 5a8dd03c-5e56-400c-a7be-27c48ccbd6ae is in state STARTED 2026-01-05 00:54:24.058992 | orchestrator | 2026-01-05 00:54:24 | INFO  | Task 4575e532-ddbb-45c8-ba87-2ded58f28298 is in state STARTED 2026-01-05 00:54:24.059815 | orchestrator | 2026-01-05 00:54:24 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:54:24.059854 | orchestrator | 2026-01-05 00:54:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:54:27.117083 | orchestrator | 2026-01-05 00:54:27 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:54:27.118426 | orchestrator | 2026-01-05 00:54:27 | INFO  | Task bc460e67-278a-4750-b31b-0765110271aa is in state STARTED 2026-01-05 00:54:27.120493 | orchestrator | 2026-01-05 00:54:27 | INFO  | Task 81014e09-4fd4-420a-986c-c979db8fb294 is in state 
STARTED 2026-01-05 00:54:27.121566 | orchestrator | 2026-01-05 00:54:27 | INFO  | Task 5a8dd03c-5e56-400c-a7be-27c48ccbd6ae is in state SUCCESS 2026-01-05 00:54:27.123088 | orchestrator | 2026-01-05 00:54:27 | INFO  | Task 4575e532-ddbb-45c8-ba87-2ded58f28298 is in state STARTED 2026-01-05 00:54:27.124937 | orchestrator | 2026-01-05 00:54:27 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:54:27.124993 | orchestrator | 2026-01-05 00:54:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:54:30.168516 | orchestrator | 2026-01-05 00:54:30 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:54:30.170699 | orchestrator | 2026-01-05 00:54:30 | INFO  | Task bc460e67-278a-4750-b31b-0765110271aa is in state STARTED 2026-01-05 00:54:30.172307 | orchestrator | 2026-01-05 00:54:30 | INFO  | Task 81014e09-4fd4-420a-986c-c979db8fb294 is in state STARTED 2026-01-05 00:54:30.175759 | orchestrator | 2026-01-05 00:54:30 | INFO  | Task 4575e532-ddbb-45c8-ba87-2ded58f28298 is in state STARTED 2026-01-05 00:54:30.177339 | orchestrator | 2026-01-05 00:54:30 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:54:30.178515 | orchestrator | 2026-01-05 00:54:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:54:33.230506 | orchestrator | 2026-01-05 00:54:33 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:54:33.232690 | orchestrator | 2026-01-05 00:54:33 | INFO  | Task bc460e67-278a-4750-b31b-0765110271aa is in state STARTED 2026-01-05 00:54:33.234854 | orchestrator | 2026-01-05 00:54:33 | INFO  | Task 81014e09-4fd4-420a-986c-c979db8fb294 is in state STARTED 2026-01-05 00:54:33.235529 | orchestrator | 2026-01-05 00:54:33 | INFO  | Task 4575e532-ddbb-45c8-ba87-2ded58f28298 is in state SUCCESS 2026-01-05 00:54:33.236976 | orchestrator | 2026-01-05 00:54:33 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 
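The STARTED/SUCCESS cycle above is a simple polling loop: each tracked task's state is queried, tasks that reach SUCCESS are reported once and dropped, and the runner then waits a second before the next round. A minimal Python sketch of that loop, where `get_state` is a hypothetical stand-in for the real task-status API used by the deploy tooling:

```python
import time


def wait_for_tasks(task_ids, get_state, interval=1):
    """Poll until every task reports SUCCESS.

    get_state(task_id) -> "STARTED" | "SUCCESS" (hypothetical stand-in
    for the actual task-status query). Finished tasks are dropped, so
    each SUCCESS is logged exactly once, as in the console output above.
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
```

This matches the shrinking task list in the log: once a task ID is reported as SUCCESS, it no longer appears in later polling rounds.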
2026-01-05 00:54:33.237356 | orchestrator | 2026-01-05 00:54:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:54:36.281142 | orchestrator | 2026-01-05 00:54:36 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:54:36.281267 | orchestrator | 2026-01-05 00:54:36 | INFO  | Task bc460e67-278a-4750-b31b-0765110271aa is in state STARTED 2026-01-05 00:54:36.282515 | orchestrator | 2026-01-05 00:54:36 | INFO  | Task 81014e09-4fd4-420a-986c-c979db8fb294 is in state STARTED 2026-01-05 00:54:36.283584 | orchestrator | 2026-01-05 00:54:36 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:54:36.284847 | orchestrator | 2026-01-05 00:54:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:54:39.325481 | orchestrator | 2026-01-05 00:54:39 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:54:39.326277 | orchestrator | 2026-01-05 00:54:39 | INFO  | Task bc460e67-278a-4750-b31b-0765110271aa is in state STARTED 2026-01-05 00:54:39.327177 | orchestrator | 2026-01-05 00:54:39 | INFO  | Task 81014e09-4fd4-420a-986c-c979db8fb294 is in state STARTED 2026-01-05 00:54:39.328448 | orchestrator | 2026-01-05 00:54:39 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:54:39.328486 | orchestrator | 2026-01-05 00:54:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:54:42.372792 | orchestrator | 2026-01-05 00:54:42 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:54:42.373084 | orchestrator | 2026-01-05 00:54:42 | INFO  | Task bc460e67-278a-4750-b31b-0765110271aa is in state STARTED 2026-01-05 00:54:42.373794 | orchestrator | 2026-01-05 00:54:42 | INFO  | Task 81014e09-4fd4-420a-986c-c979db8fb294 is in state STARTED 2026-01-05 00:54:42.374575 | orchestrator | 2026-01-05 00:54:42 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:54:42.374641 | 
orchestrator | 2026-01-05 00:54:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:54:45.417422 | orchestrator | 2026-01-05 00:54:45 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:54:45.417525 | orchestrator | 2026-01-05 00:54:45 | INFO  | Task bc460e67-278a-4750-b31b-0765110271aa is in state STARTED 2026-01-05 00:54:45.419189 | orchestrator | 2026-01-05 00:54:45 | INFO  | Task 81014e09-4fd4-420a-986c-c979db8fb294 is in state STARTED 2026-01-05 00:54:45.420048 | orchestrator | 2026-01-05 00:54:45 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:54:45.420084 | orchestrator | 2026-01-05 00:54:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:54:48.462945 | orchestrator | 2026-01-05 00:54:48 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:54:48.464239 | orchestrator | 2026-01-05 00:54:48 | INFO  | Task bc460e67-278a-4750-b31b-0765110271aa is in state STARTED 2026-01-05 00:54:48.465509 | orchestrator | 2026-01-05 00:54:48 | INFO  | Task 81014e09-4fd4-420a-986c-c979db8fb294 is in state STARTED 2026-01-05 00:54:48.466717 | orchestrator | 2026-01-05 00:54:48 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:54:48.466762 | orchestrator | 2026-01-05 00:54:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:54:51.517776 | orchestrator | 2026-01-05 00:54:51 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:54:51.520463 | orchestrator | 2026-01-05 00:54:51 | INFO  | Task bc460e67-278a-4750-b31b-0765110271aa is in state STARTED 2026-01-05 00:54:51.522503 | orchestrator | 2026-01-05 00:54:51 | INFO  | Task 81014e09-4fd4-420a-986c-c979db8fb294 is in state STARTED 2026-01-05 00:54:51.524048 | orchestrator | 2026-01-05 00:54:51 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:54:51.524092 | orchestrator | 2026-01-05 
00:54:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:54:54.554784 | orchestrator | 2026-01-05 00:54:54 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:54:54.557073 | orchestrator | 2026-01-05 00:54:54 | INFO  | Task bc460e67-278a-4750-b31b-0765110271aa is in state STARTED 2026-01-05 00:54:54.557508 | orchestrator | 2026-01-05 00:54:54 | INFO  | Task 81014e09-4fd4-420a-986c-c979db8fb294 is in state STARTED 2026-01-05 00:54:54.558277 | orchestrator | 2026-01-05 00:54:54 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:54:54.558320 | orchestrator | 2026-01-05 00:54:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:54:57.615356 | orchestrator | 2026-01-05 00:54:57 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:54:57.615765 | orchestrator | 2026-01-05 00:54:57 | INFO  | Task bc460e67-278a-4750-b31b-0765110271aa is in state SUCCESS 2026-01-05 00:54:57.618107 | orchestrator | 2026-01-05 00:54:57.618179 | orchestrator | 2026-01-05 00:54:57.618193 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-01-05 00:54:57.618204 | orchestrator | 2026-01-05 00:54:57.618214 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-05 00:54:57.618224 | orchestrator | Monday 05 January 2026 00:54:22 +0000 (0:00:00.204) 0:00:00.204 ******** 2026-01-05 00:54:57.618233 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-05 00:54:57.618243 | orchestrator | 2026-01-05 00:54:57.618253 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-05 00:54:57.618262 | orchestrator | Monday 05 January 2026 00:54:23 +0000 (0:00:00.996) 0:00:01.200 ******** 2026-01-05 00:54:57.618271 | orchestrator | changed: [testbed-manager] 2026-01-05 00:54:57.618280 | orchestrator | 2026-01-05 
00:54:57.618289 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-01-05 00:54:57.618298 | orchestrator | Monday 05 January 2026 00:54:25 +0000 (0:00:01.613) 0:00:02.813 ******** 2026-01-05 00:54:57.618307 | orchestrator | changed: [testbed-manager] 2026-01-05 00:54:57.618316 | orchestrator | 2026-01-05 00:54:57.618324 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:54:57.618334 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:54:57.618345 | orchestrator | 2026-01-05 00:54:57.618353 | orchestrator | 2026-01-05 00:54:57.618362 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:54:57.618371 | orchestrator | Monday 05 January 2026 00:54:25 +0000 (0:00:00.549) 0:00:03.363 ******** 2026-01-05 00:54:57.618380 | orchestrator | =============================================================================== 2026-01-05 00:54:57.618388 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.61s 2026-01-05 00:54:57.618398 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.00s 2026-01-05 00:54:57.618430 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.55s 2026-01-05 00:54:57.618439 | orchestrator | 2026-01-05 00:54:57.618448 | orchestrator | 2026-01-05 00:54:57.618457 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-01-05 00:54:57.618465 | orchestrator | 2026-01-05 00:54:57.618474 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-01-05 00:54:57.618482 | orchestrator | Monday 05 January 2026 00:54:22 +0000 (0:00:00.224) 0:00:00.224 ******** 2026-01-05 00:54:57.618491 | orchestrator | ok: [testbed-manager] 
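The "Change server address in the kubeconfig file" tasks above rewrite the `server:` entry that k3s writes (which points at the node itself) so the file targets the externally reachable API endpoint instead. A minimal text-level sketch of that rewrite, assuming a plain regex substitution (the addresses below are illustrative, not taken from this deployment):

```python
import re


def set_kubeconfig_server(kubeconfig_text, server_url):
    """Point every cluster's `server:` entry at server_url.

    Works on the kubeconfig as plain text; a YAML-aware edit would be
    equally valid, this just keeps the sketch dependency-free.
    """
    return re.sub(
        r"(?m)^(\s*server:\s*).*$",  # each `server:` line, keeping indentation
        r"\g<1>" + server_url,
        kubeconfig_text,
    )
```

Example: `set_kubeconfig_server(text, "https://192.168.16.9:6443")` replaces a default `server: https://127.0.0.1:6443` entry with the given endpoint.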
2026-01-05 00:54:57.618500 | orchestrator | 2026-01-05 00:54:57.618509 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-01-05 00:54:57.618518 | orchestrator | Monday 05 January 2026 00:54:22 +0000 (0:00:00.839) 0:00:01.063 ******** 2026-01-05 00:54:57.618526 | orchestrator | ok: [testbed-manager] 2026-01-05 00:54:57.618535 | orchestrator | 2026-01-05 00:54:57.618544 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-05 00:54:57.618582 | orchestrator | Monday 05 January 2026 00:54:23 +0000 (0:00:01.035) 0:00:02.099 ******** 2026-01-05 00:54:57.618590 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-05 00:54:57.618599 | orchestrator | 2026-01-05 00:54:57.618608 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-05 00:54:57.618617 | orchestrator | Monday 05 January 2026 00:54:24 +0000 (0:00:00.868) 0:00:02.967 ******** 2026-01-05 00:54:57.618625 | orchestrator | changed: [testbed-manager] 2026-01-05 00:54:57.618634 | orchestrator | 2026-01-05 00:54:57.618642 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-01-05 00:54:57.618651 | orchestrator | Monday 05 January 2026 00:54:26 +0000 (0:00:01.901) 0:00:04.868 ******** 2026-01-05 00:54:57.618660 | orchestrator | changed: [testbed-manager] 2026-01-05 00:54:57.618669 | orchestrator | 2026-01-05 00:54:57.618677 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-01-05 00:54:57.618686 | orchestrator | Monday 05 January 2026 00:54:27 +0000 (0:00:00.513) 0:00:05.382 ******** 2026-01-05 00:54:57.618694 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-05 00:54:57.618703 | orchestrator | 2026-01-05 00:54:57.618712 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 
2026-01-05 00:54:57.618720 | orchestrator | Monday 05 January 2026 00:54:28 +0000 (0:00:01.510) 0:00:06.892 ******** 2026-01-05 00:54:57.618729 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-05 00:54:57.618737 | orchestrator | 2026-01-05 00:54:57.618746 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-01-05 00:54:57.618754 | orchestrator | Monday 05 January 2026 00:54:29 +0000 (0:00:00.989) 0:00:07.882 ******** 2026-01-05 00:54:57.618763 | orchestrator | ok: [testbed-manager] 2026-01-05 00:54:57.618772 | orchestrator | 2026-01-05 00:54:57.618780 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-01-05 00:54:57.618789 | orchestrator | Monday 05 January 2026 00:54:30 +0000 (0:00:00.503) 0:00:08.385 ******** 2026-01-05 00:54:57.618798 | orchestrator | ok: [testbed-manager] 2026-01-05 00:54:57.618806 | orchestrator | 2026-01-05 00:54:57.618815 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:54:57.618823 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:54:57.618832 | orchestrator | 2026-01-05 00:54:57.618841 | orchestrator | 2026-01-05 00:54:57.618849 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:54:57.618858 | orchestrator | Monday 05 January 2026 00:54:30 +0000 (0:00:00.383) 0:00:08.769 ******** 2026-01-05 00:54:57.618867 | orchestrator | =============================================================================== 2026-01-05 00:54:57.618875 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.90s 2026-01-05 00:54:57.618884 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.51s 2026-01-05 00:54:57.618900 | orchestrator | Create .kube directory 
-------------------------------------------------- 1.04s 2026-01-05 00:54:57.618925 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.99s 2026-01-05 00:54:57.618934 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.87s 2026-01-05 00:54:57.618943 | orchestrator | Get home directory of operator user ------------------------------------- 0.84s 2026-01-05 00:54:57.618952 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.51s 2026-01-05 00:54:57.618961 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.50s 2026-01-05 00:54:57.618975 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.38s 2026-01-05 00:54:57.618990 | orchestrator | 2026-01-05 00:54:57.619003 | orchestrator | 2026-01-05 00:54:57.619018 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-01-05 00:54:57.619033 | orchestrator | 2026-01-05 00:54:57.619047 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-01-05 00:54:57.619061 | orchestrator | Monday 05 January 2026 00:52:21 +0000 (0:00:00.168) 0:00:00.168 ******** 2026-01-05 00:54:57.619075 | orchestrator | ok: [localhost] => { 2026-01-05 00:54:57.619090 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-01-05 00:54:57.619103 | orchestrator | } 2026-01-05 00:54:57.619117 | orchestrator | 2026-01-05 00:54:57.619131 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-01-05 00:54:57.619144 | orchestrator | Monday 05 January 2026 00:52:21 +0000 (0:00:00.070) 0:00:00.239 ******** 2026-01-05 00:54:57.619160 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-01-05 00:54:57.619177 | orchestrator | ...ignoring 2026-01-05 00:54:57.619192 | orchestrator | 2026-01-05 00:54:57.619208 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-01-05 00:54:57.619223 | orchestrator | Monday 05 January 2026 00:52:26 +0000 (0:00:05.254) 0:00:05.493 ******** 2026-01-05 00:54:57.619237 | orchestrator | skipping: [localhost] 2026-01-05 00:54:57.619251 | orchestrator | 2026-01-05 00:54:57.619264 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-01-05 00:54:57.619365 | orchestrator | Monday 05 January 2026 00:52:27 +0000 (0:00:00.111) 0:00:05.605 ******** 2026-01-05 00:54:57.619383 | orchestrator | ok: [localhost] 2026-01-05 00:54:57.619393 | orchestrator | 2026-01-05 00:54:57.619402 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 00:54:57.619410 | orchestrator | 2026-01-05 00:54:57.619419 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 00:54:57.619428 | orchestrator | Monday 05 January 2026 00:52:27 +0000 (0:00:00.220) 0:00:05.825 ******** 2026-01-05 00:54:57.619437 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:57.619446 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:57.619454 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:57.619463 | orchestrator | 2026-01-05 00:54:57.619471 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 00:54:57.619480 | orchestrator | Monday 05 January 2026 00:52:27 +0000 (0:00:00.313) 0:00:06.139 ******** 2026-01-05 00:54:57.619489 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-01-05 00:54:57.619502 | orchestrator | ok: [testbed-node-1] => 
(item=enable_rabbitmq_True) 2026-01-05 00:54:57.619511 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-01-05 00:54:57.619520 | orchestrator | 2026-01-05 00:54:57.619529 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-01-05 00:54:57.619537 | orchestrator | 2026-01-05 00:54:57.619631 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-05 00:54:57.619644 | orchestrator | Monday 05 January 2026 00:52:28 +0000 (0:00:01.181) 0:00:07.321 ******** 2026-01-05 00:54:57.619663 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:54:57.619672 | orchestrator | 2026-01-05 00:54:57.619681 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-05 00:54:57.619690 | orchestrator | Monday 05 January 2026 00:52:29 +0000 (0:00:01.093) 0:00:08.414 ******** 2026-01-05 00:54:57.619698 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:57.619707 | orchestrator | 2026-01-05 00:54:57.619716 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-01-05 00:54:57.619724 | orchestrator | Monday 05 January 2026 00:52:31 +0000 (0:00:01.643) 0:00:10.058 ******** 2026-01-05 00:54:57.619733 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:57.619742 | orchestrator | 2026-01-05 00:54:57.619751 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-01-05 00:54:57.619759 | orchestrator | Monday 05 January 2026 00:52:32 +0000 (0:00:01.250) 0:00:11.308 ******** 2026-01-05 00:54:57.619768 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:57.619776 | orchestrator | 2026-01-05 00:54:57.619785 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-01-05 00:54:57.619793 | 
orchestrator | Monday 05 January 2026 00:52:33 +0000 (0:00:00.689) 0:00:11.998 ******** 2026-01-05 00:54:57.619802 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:57.619811 | orchestrator | 2026-01-05 00:54:57.619819 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-01-05 00:54:57.619828 | orchestrator | Monday 05 January 2026 00:52:33 +0000 (0:00:00.306) 0:00:12.304 ******** 2026-01-05 00:54:57.619836 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:57.619845 | orchestrator | 2026-01-05 00:54:57.619854 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-05 00:54:57.619862 | orchestrator | Monday 05 January 2026 00:52:34 +0000 (0:00:00.829) 0:00:13.134 ******** 2026-01-05 00:54:57.619871 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:54:57.619880 | orchestrator | 2026-01-05 00:54:57.620027 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-05 00:54:57.620054 | orchestrator | Monday 05 January 2026 00:52:35 +0000 (0:00:00.936) 0:00:14.071 ******** 2026-01-05 00:54:57.620064 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:57.620073 | orchestrator | 2026-01-05 00:54:57.620082 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-01-05 00:54:57.620090 | orchestrator | Monday 05 January 2026 00:52:36 +0000 (0:00:00.863) 0:00:14.934 ******** 2026-01-05 00:54:57.620099 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:57.620108 | orchestrator | 2026-01-05 00:54:57.620117 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-01-05 00:54:57.620125 | orchestrator | Monday 05 January 2026 00:52:36 +0000 (0:00:00.562) 0:00:15.497 ******** 2026-01-05 00:54:57.620133 | orchestrator | 
skipping: [testbed-node-0] 2026-01-05 00:54:57.620141 | orchestrator | 2026-01-05 00:54:57.620149 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-01-05 00:54:57.620157 | orchestrator | Monday 05 January 2026 00:52:37 +0000 (0:00:00.455) 0:00:15.952 ******** 2026-01-05 00:54:57.620169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 00:54:57.620194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 00:54:57.620206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 00:54:57.620215 | orchestrator | 2026-01-05 00:54:57.620224 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-01-05 00:54:57.620237 | orchestrator | Monday 05 January 2026 00:52:38 +0000 (0:00:01.352) 0:00:17.305 ******** 2026-01-05 00:54:57.620260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 00:54:57.620275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 00:54:57.620304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 00:54:57.620320 | orchestrator | 2026-01-05 00:54:57.620334 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-01-05 00:54:57.620348 | orchestrator | Monday 05 January 2026 00:52:41 +0000 (0:00:02.888) 0:00:20.193 ******** 2026-01-05 00:54:57.620363 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-05 00:54:57.620377 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-05 00:54:57.620391 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-05 00:54:57.620405 | orchestrator | 2026-01-05 00:54:57.620420 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 
2026-01-05 00:54:57.620428 | orchestrator | Monday 05 January 2026 00:52:44 +0000 (0:00:03.158) 0:00:23.352 ******** 2026-01-05 00:54:57.620436 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-05 00:54:57.620444 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-05 00:54:57.620452 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-05 00:54:57.620460 | orchestrator | 2026-01-05 00:54:57.620468 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-01-05 00:54:57.620482 | orchestrator | Monday 05 January 2026 00:52:49 +0000 (0:00:04.959) 0:00:28.312 ******** 2026-01-05 00:54:57.620490 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-05 00:54:57.620498 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-05 00:54:57.620506 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-05 00:54:57.620513 | orchestrator | 2026-01-05 00:54:57.620521 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-01-05 00:54:57.620529 | orchestrator | Monday 05 January 2026 00:52:51 +0000 (0:00:01.517) 0:00:29.830 ******** 2026-01-05 00:54:57.620537 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-05 00:54:57.620569 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-05 00:54:57.620578 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-05 00:54:57.620586 | orchestrator | 2026-01-05 00:54:57.620594 | orchestrator | TASK [rabbitmq : Copying over 
definitions.json] ******************************** 2026-01-05 00:54:57.620602 | orchestrator | Monday 05 January 2026 00:52:54 +0000 (0:00:03.175) 0:00:33.006 ******** 2026-01-05 00:54:57.620610 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-05 00:54:57.620618 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-05 00:54:57.620626 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-05 00:54:57.620634 | orchestrator | 2026-01-05 00:54:57.620642 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-01-05 00:54:57.620649 | orchestrator | Monday 05 January 2026 00:52:56 +0000 (0:00:01.613) 0:00:34.620 ******** 2026-01-05 00:54:57.620657 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-05 00:54:57.620665 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-05 00:54:57.620673 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-05 00:54:57.620681 | orchestrator | 2026-01-05 00:54:57.620689 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-05 00:54:57.620697 | orchestrator | Monday 05 January 2026 00:52:57 +0000 (0:00:01.875) 0:00:36.495 ******** 2026-01-05 00:54:57.620705 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:57.620713 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:57.620721 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:57.620729 | orchestrator | 2026-01-05 00:54:57.620737 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-01-05 00:54:57.620744 | orchestrator | Monday 05 January 2026 00:52:58 
+0000 (0:00:00.409) 0:00:36.904 ******** 2026-01-05 00:54:57.620758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 00:54:57.620773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 00:54:57.620788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 00:54:57.620797 | orchestrator | 2026-01-05 00:54:57.620805 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-01-05 00:54:57.620813 | orchestrator | Monday 05 January 2026 00:53:00 +0000 (0:00:01.890) 0:00:38.795 ******** 2026-01-05 00:54:57.620821 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:57.620829 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:57.620837 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:57.620845 | orchestrator | 2026-01-05 00:54:57.620852 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-01-05 00:54:57.620861 | 
orchestrator | Monday 05 January 2026 00:53:01 +0000 (0:00:01.304) 0:00:40.099 ******** 2026-01-05 00:54:57.620868 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:57.620876 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:57.620884 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:57.620892 | orchestrator | 2026-01-05 00:54:57.620900 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-01-05 00:54:57.620908 | orchestrator | Monday 05 January 2026 00:53:10 +0000 (0:00:09.403) 0:00:49.502 ******** 2026-01-05 00:54:57.620915 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:57.620923 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:57.620931 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:57.620939 | orchestrator | 2026-01-05 00:54:57.620947 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-05 00:54:57.620955 | orchestrator | 2026-01-05 00:54:57.620962 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-05 00:54:57.620974 | orchestrator | Monday 05 January 2026 00:53:12 +0000 (0:00:01.597) 0:00:51.100 ******** 2026-01-05 00:54:57.620982 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:57.620990 | orchestrator | 2026-01-05 00:54:57.620998 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-05 00:54:57.621006 | orchestrator | Monday 05 January 2026 00:53:13 +0000 (0:00:00.890) 0:00:51.991 ******** 2026-01-05 00:54:57.621014 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:57.621022 | orchestrator | 2026-01-05 00:54:57.621029 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-05 00:54:57.621037 | orchestrator | Monday 05 January 2026 00:53:13 +0000 (0:00:00.284) 0:00:52.276 ******** 2026-01-05 00:54:57.621045 | orchestrator 
| changed: [testbed-node-0] 2026-01-05 00:54:57.621053 | orchestrator | 2026-01-05 00:54:57.621061 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-05 00:54:57.621069 | orchestrator | Monday 05 January 2026 00:53:15 +0000 (0:00:02.020) 0:00:54.296 ******** 2026-01-05 00:54:57.621084 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:57.621092 | orchestrator | 2026-01-05 00:54:57.621100 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-05 00:54:57.621107 | orchestrator | 2026-01-05 00:54:57.621115 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-05 00:54:57.621123 | orchestrator | Monday 05 January 2026 00:54:13 +0000 (0:00:57.470) 0:01:51.767 ******** 2026-01-05 00:54:57.621131 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:57.621139 | orchestrator | 2026-01-05 00:54:57.621147 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-05 00:54:57.621155 | orchestrator | Monday 05 January 2026 00:54:14 +0000 (0:00:01.089) 0:01:52.856 ******** 2026-01-05 00:54:57.621162 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:57.621170 | orchestrator | 2026-01-05 00:54:57.621178 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-05 00:54:57.621186 | orchestrator | Monday 05 January 2026 00:54:14 +0000 (0:00:00.440) 0:01:53.297 ******** 2026-01-05 00:54:57.621194 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:57.621202 | orchestrator | 2026-01-05 00:54:57.621209 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-05 00:54:57.621218 | orchestrator | Monday 05 January 2026 00:54:21 +0000 (0:00:07.057) 0:02:00.354 ******** 2026-01-05 00:54:57.621226 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:57.621233 
| orchestrator | 2026-01-05 00:54:57.621242 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-05 00:54:57.621249 | orchestrator | 2026-01-05 00:54:57.621257 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-05 00:54:57.621265 | orchestrator | Monday 05 January 2026 00:54:33 +0000 (0:00:12.002) 0:02:12.357 ******** 2026-01-05 00:54:57.621273 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:57.621281 | orchestrator | 2026-01-05 00:54:57.621293 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-05 00:54:57.621301 | orchestrator | Monday 05 January 2026 00:54:34 +0000 (0:00:00.624) 0:02:12.982 ******** 2026-01-05 00:54:57.621309 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:57.621317 | orchestrator | 2026-01-05 00:54:57.621325 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-05 00:54:57.621333 | orchestrator | Monday 05 January 2026 00:54:34 +0000 (0:00:00.255) 0:02:13.237 ******** 2026-01-05 00:54:57.621340 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:57.621348 | orchestrator | 2026-01-05 00:54:57.621356 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-05 00:54:57.621364 | orchestrator | Monday 05 January 2026 00:54:36 +0000 (0:00:01.723) 0:02:14.960 ******** 2026-01-05 00:54:57.621372 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:57.621379 | orchestrator | 2026-01-05 00:54:57.621387 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-01-05 00:54:57.621399 | orchestrator | 2026-01-05 00:54:57.621413 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-01-05 00:54:57.621432 | orchestrator | Monday 05 January 2026 00:54:53 +0000 (0:00:16.816) 
0:02:31.777 ******** 2026-01-05 00:54:57.621450 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:54:57.621464 | orchestrator | 2026-01-05 00:54:57.621477 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-01-05 00:54:57.621490 | orchestrator | Monday 05 January 2026 00:54:53 +0000 (0:00:00.547) 0:02:32.325 ******** 2026-01-05 00:54:57.621503 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-01-05 00:54:57.621517 | orchestrator | enable_outward_rabbitmq_True 2026-01-05 00:54:57.621530 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-01-05 00:54:57.621543 | orchestrator | outward_rabbitmq_restart 2026-01-05 00:54:57.621581 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:54:57.621594 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:54:57.621617 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:54:57.621631 | orchestrator | 2026-01-05 00:54:57.621645 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-01-05 00:54:57.621657 | orchestrator | skipping: no hosts matched 2026-01-05 00:54:57.621670 | orchestrator | 2026-01-05 00:54:57.621683 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-01-05 00:54:57.621691 | orchestrator | skipping: no hosts matched 2026-01-05 00:54:57.621699 | orchestrator | 2026-01-05 00:54:57.621707 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-01-05 00:54:57.621715 | orchestrator | skipping: no hosts matched 2026-01-05 00:54:57.621723 | orchestrator | 2026-01-05 00:54:57.621731 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:54:57.621740 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-01-05 
00:54:57.621748 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-05 00:54:57.621761 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:54:57.621769 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:54:57.621777 | orchestrator | 2026-01-05 00:54:57.621785 | orchestrator | 2026-01-05 00:54:57.621793 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:54:57.621801 | orchestrator | Monday 05 January 2026 00:54:56 +0000 (0:00:02.777) 0:02:35.102 ******** 2026-01-05 00:54:57.621809 | orchestrator | =============================================================================== 2026-01-05 00:54:57.621816 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 86.29s 2026-01-05 00:54:57.621824 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.80s 2026-01-05 00:54:57.621832 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 9.40s 2026-01-05 00:54:57.621840 | orchestrator | Check RabbitMQ service -------------------------------------------------- 5.25s 2026-01-05 00:54:57.621847 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 4.96s 2026-01-05 00:54:57.621855 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 3.18s 2026-01-05 00:54:57.621863 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 3.16s 2026-01-05 00:54:57.621871 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.89s 2026-01-05 00:54:57.621878 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.78s 2026-01-05 00:54:57.621886 | 
orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.60s 2026-01-05 00:54:57.621894 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.89s 2026-01-05 00:54:57.621902 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.88s 2026-01-05 00:54:57.621910 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.64s 2026-01-05 00:54:57.621917 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.61s 2026-01-05 00:54:57.621925 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 1.60s 2026-01-05 00:54:57.621933 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.52s 2026-01-05 00:54:57.621941 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.35s 2026-01-05 00:54:57.621956 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.30s 2026-01-05 00:54:57.621964 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 1.25s 2026-01-05 00:54:57.621972 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.18s 2026-01-05 00:54:57.621990 | orchestrator | 2026-01-05 00:54:57 | INFO  | Task 81014e09-4fd4-420a-986c-c979db8fb294 is in state STARTED 2026-01-05 00:54:57.621999 | orchestrator | 2026-01-05 00:54:57 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:54:57.622007 | orchestrator | 2026-01-05 00:54:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:55:00.660499 | orchestrator | 2026-01-05 00:55:00 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:56:01.652731 | orchestrator | 2026-01-05 00:56:01 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:56:01.655330 | orchestrator | 2026-01-05 00:56:01 | INFO  | Task 81014e09-4fd4-420a-986c-c979db8fb294 is in state SUCCESS 2026-01-05 00:56:01.657201 | orchestrator | 2026-01-05 00:56:01.657245 | orchestrator | 2026-01-05 00:56:01.657274 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 00:56:01.657282 | orchestrator | 2026-01-05 00:56:01.657289 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 00:56:01.657297 | orchestrator | Monday 05 January 2026 00:53:24 +0000 (0:00:00.224) 0:00:00.224 ******** 2026-01-05 00:56:01.657304 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:56:01.657312 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:01.657319 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:56:01.657326 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:01.657332 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:01.657339 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:01.657346 | orchestrator | 2026-01-05 00:56:01.657364 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 00:56:01.657374 | orchestrator | Monday 05 January 2026 00:53:25 +0000 (0:00:00.812) 0:00:01.037 ******** 2026-01-05 00:56:01.657385 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-01-05 00:56:01.657395 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-01-05 00:56:01.657406 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-01-05 00:56:01.657417 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-01-05 00:56:01.657449 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-01-05 00:56:01.657459 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-01-05 00:56:01.657469 | orchestrator | 2026-01-05 00:56:01.657480 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-01-05 00:56:01.657488 | orchestrator | 2026-01-05 00:56:01.657495 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-01-05 00:56:01.657501 | orchestrator | Monday 05 January 2026 00:53:26 +0000 (0:00:01.055) 0:00:02.092 ******** 2026-01-05 00:56:01.657509 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:56:01.657517 | orchestrator | 2026-01-05 00:56:01.657523 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-01-05 00:56:01.657529 | orchestrator | Monday 05 January 2026 00:53:28 +0000 (0:00:01.956) 0:00:04.048 ******** 2026-01-05 00:56:01.657537 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.657548 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.657555 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.657561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.657574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.657581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.657587 | orchestrator | 2026-01-05 00:56:01.657605 | orchestrator | TASK [ovn-controller 
: Copying over config.json files for services] ************ 2026-01-05 00:56:01.657612 | orchestrator | Monday 05 January 2026 00:53:30 +0000 (0:00:01.945) 0:00:05.994 ******** 2026-01-05 00:56:01.657623 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.657630 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.657636 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.657642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-01-05 00:56:01.657649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.657655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.657661 | orchestrator | 2026-01-05 00:56:01.657668 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-01-05 00:56:01.657681 | orchestrator | Monday 05 January 2026 00:53:32 +0000 (0:00:01.679) 0:00:07.674 ******** 2026-01-05 00:56:01.657688 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.657789 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.657812 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.657820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.657831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.657839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.657847 | orchestrator | 2026-01-05 00:56:01.658671 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-01-05 00:56:01.658757 | orchestrator | Monday 05 January 2026 00:53:33 +0000 (0:00:01.770) 0:00:09.444 ******** 2026-01-05 00:56:01.658768 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.658776 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.658783 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.658798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.658805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.658811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.658818 | orchestrator | 2026-01-05 00:56:01.658832 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-01-05 00:56:01.658839 | orchestrator | Monday 05 January 2026 00:53:36 +0000 (0:00:02.033) 0:00:11.478 ******** 2026-01-05 00:56:01.658851 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.658857 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.658864 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.658880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.658887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.658898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.658904 | orchestrator | 2026-01-05 00:56:01.658910 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-01-05 00:56:01.658965 | orchestrator | Monday 05 January 2026 00:53:37 +0000 (0:00:01.808) 0:00:13.287 ******** 2026-01-05 00:56:01.658972 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:56:01.658980 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:56:01.658986 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:56:01.658992 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:56:01.658999 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:56:01.659005 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:56:01.659011 | orchestrator | 2026-01-05 00:56:01.659018 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-01-05 00:56:01.659024 | orchestrator | Monday 05 January 2026 00:53:40 +0000 (0:00:02.964) 0:00:16.252 ******** 2026-01-05 00:56:01.659030 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-01-05 00:56:01.659037 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-01-05 00:56:01.659043 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-01-05 00:56:01.659049 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-01-05 00:56:01.659055 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-01-05 00:56:01.659062 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-01-05 00:56:01.659068 
| orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-05 00:56:01.659074 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-05 00:56:01.659086 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-05 00:56:01.659093 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-05 00:56:01.659099 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-05 00:56:01.659105 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-05 00:56:01.659112 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-05 00:56:01.659124 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-05 00:56:01.659130 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-05 00:56:01.659136 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-05 00:56:01.659143 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-05 00:56:01.659149 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-05 00:56:01.659160 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-05 00:56:01.659168 | 
orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-05 00:56:01.659174 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-05 00:56:01.659180 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-05 00:56:01.659186 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-05 00:56:01.659192 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-05 00:56:01.659198 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-05 00:56:01.659205 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-05 00:56:01.659211 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-05 00:56:01.659226 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-05 00:56:01.659233 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-05 00:56:01.659239 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-05 00:56:01.659245 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-05 00:56:01.659252 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-05 00:56:01.659258 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-05 00:56:01.659264 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-05 00:56:01.659270 | orchestrator 
| changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-05 00:56:01.659277 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-05 00:56:01.659283 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-05 00:56:01.659289 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-05 00:56:01.659296 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-05 00:56:01.659302 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-05 00:56:01.659308 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-01-05 00:56:01.659316 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-05 00:56:01.659322 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-05 00:56:01.659328 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-01-05 00:56:01.659339 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-01-05 00:56:01.659345 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-05 00:56:01.659352 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-01-05 
00:56:01.659358 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-01-05 00:56:01.659373 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-01-05 00:56:01.659379 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-05 00:56:01.659386 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-05 00:56:01.659392 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-05 00:56:01.659398 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-05 00:56:01.659404 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-05 00:56:01.659411 | orchestrator | 2026-01-05 00:56:01.659417 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-05 00:56:01.659423 | orchestrator | Monday 05 January 2026 00:54:03 +0000 (0:00:22.834) 0:00:39.087 ******** 2026-01-05 00:56:01.659446 | orchestrator | 2026-01-05 00:56:01.659453 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-05 00:56:01.659459 | orchestrator | Monday 05 January 2026 00:54:03 +0000 (0:00:00.261) 0:00:39.349 ******** 2026-01-05 00:56:01.659465 | orchestrator | 2026-01-05 00:56:01.659471 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-05 00:56:01.659477 | orchestrator | Monday 05 January 2026 00:54:03 +0000 (0:00:00.120) 
0:00:39.469 ******** 2026-01-05 00:56:01.659484 | orchestrator | 2026-01-05 00:56:01.659490 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-05 00:56:01.659496 | orchestrator | Monday 05 January 2026 00:54:04 +0000 (0:00:00.165) 0:00:39.635 ******** 2026-01-05 00:56:01.659502 | orchestrator | 2026-01-05 00:56:01.659508 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-05 00:56:01.659514 | orchestrator | Monday 05 January 2026 00:54:04 +0000 (0:00:00.186) 0:00:39.822 ******** 2026-01-05 00:56:01.659520 | orchestrator | 2026-01-05 00:56:01.659527 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-05 00:56:01.659533 | orchestrator | Monday 05 January 2026 00:54:04 +0000 (0:00:00.170) 0:00:39.992 ******** 2026-01-05 00:56:01.659539 | orchestrator | 2026-01-05 00:56:01.659545 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-01-05 00:56:01.659551 | orchestrator | Monday 05 January 2026 00:54:04 +0000 (0:00:00.164) 0:00:40.157 ******** 2026-01-05 00:56:01.659558 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:56:01.659564 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:01.659570 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:01.659576 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:56:01.659582 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:56:01.659588 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:01.659595 | orchestrator | 2026-01-05 00:56:01.659601 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-01-05 00:56:01.659609 | orchestrator | Monday 05 January 2026 00:54:06 +0000 (0:00:02.241) 0:00:42.399 ******** 2026-01-05 00:56:01.659616 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:56:01.659623 | orchestrator | changed: [testbed-node-1] 2026-01-05 
00:56:01.659630 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:56:01.659638 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:56:01.659645 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:56:01.659652 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:56:01.659659 | orchestrator | 2026-01-05 00:56:01.659667 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-01-05 00:56:01.659675 | orchestrator | 2026-01-05 00:56:01.659683 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-05 00:56:01.659694 | orchestrator | Monday 05 January 2026 00:54:40 +0000 (0:00:33.854) 0:01:16.253 ******** 2026-01-05 00:56:01.659702 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:56:01.659709 | orchestrator | 2026-01-05 00:56:01.659716 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-05 00:56:01.659724 | orchestrator | Monday 05 January 2026 00:54:41 +0000 (0:00:00.789) 0:01:17.042 ******** 2026-01-05 00:56:01.659731 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:56:01.659738 | orchestrator | 2026-01-05 00:56:01.659745 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-01-05 00:56:01.659753 | orchestrator | Monday 05 January 2026 00:54:42 +0000 (0:00:00.632) 0:01:17.675 ******** 2026-01-05 00:56:01.659760 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:01.659768 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:01.659775 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:01.659782 | orchestrator | 2026-01-05 00:56:01.659790 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-01-05 00:56:01.659798 | orchestrator | 
Monday 05 January 2026 00:54:43 +0000 (0:00:01.225) 0:01:18.900 ******** 2026-01-05 00:56:01.659805 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:01.659813 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:01.659821 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:01.659832 | orchestrator | 2026-01-05 00:56:01.659839 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-01-05 00:56:01.659845 | orchestrator | Monday 05 January 2026 00:54:43 +0000 (0:00:00.424) 0:01:19.325 ******** 2026-01-05 00:56:01.659851 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:01.659857 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:01.659863 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:01.659870 | orchestrator | 2026-01-05 00:56:01.659876 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-01-05 00:56:01.659882 | orchestrator | Monday 05 January 2026 00:54:44 +0000 (0:00:00.421) 0:01:19.746 ******** 2026-01-05 00:56:01.659888 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:01.659894 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:01.659901 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:01.659907 | orchestrator | 2026-01-05 00:56:01.659913 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-01-05 00:56:01.659919 | orchestrator | Monday 05 January 2026 00:54:45 +0000 (0:00:00.747) 0:01:20.494 ******** 2026-01-05 00:56:01.659925 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:01.659932 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:01.659938 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:01.659944 | orchestrator | 2026-01-05 00:56:01.659970 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-01-05 00:56:01.659977 | orchestrator | Monday 05 January 2026 00:54:45 +0000 (0:00:00.726) 0:01:21.221 ******** 
2026-01-05 00:56:01.659983 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:01.659989 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:01.659996 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:01.660002 | orchestrator | 2026-01-05 00:56:01.660008 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-01-05 00:56:01.660014 | orchestrator | Monday 05 January 2026 00:54:46 +0000 (0:00:00.419) 0:01:21.641 ******** 2026-01-05 00:56:01.660020 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:01.660026 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:01.660033 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:01.660039 | orchestrator | 2026-01-05 00:56:01.660045 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-01-05 00:56:01.660051 | orchestrator | Monday 05 January 2026 00:54:46 +0000 (0:00:00.462) 0:01:22.104 ******** 2026-01-05 00:56:01.660057 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:01.660064 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:01.660074 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:01.660080 | orchestrator | 2026-01-05 00:56:01.660086 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-01-05 00:56:01.660093 | orchestrator | Monday 05 January 2026 00:54:47 +0000 (0:00:00.503) 0:01:22.607 ******** 2026-01-05 00:56:01.660099 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:01.660105 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:01.660111 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:01.660117 | orchestrator | 2026-01-05 00:56:01.660124 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-01-05 00:56:01.660130 | orchestrator | Monday 05 January 2026 00:54:47 +0000 (0:00:00.719) 0:01:23.327 ******** 
2026-01-05 00:56:01.660136 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:01.660142 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:01.660148 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:01.660154 | orchestrator | 2026-01-05 00:56:01.660161 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-01-05 00:56:01.660167 | orchestrator | Monday 05 January 2026 00:54:48 +0000 (0:00:00.348) 0:01:23.676 ******** 2026-01-05 00:56:01.660173 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:01.660179 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:01.660185 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:01.660191 | orchestrator | 2026-01-05 00:56:01.660198 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-01-05 00:56:01.660204 | orchestrator | Monday 05 January 2026 00:54:48 +0000 (0:00:00.341) 0:01:24.017 ******** 2026-01-05 00:56:01.660210 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:01.660216 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:01.660222 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:01.660228 | orchestrator | 2026-01-05 00:56:01.660234 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-01-05 00:56:01.660241 | orchestrator | Monday 05 January 2026 00:54:48 +0000 (0:00:00.328) 0:01:24.346 ******** 2026-01-05 00:56:01.660247 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:01.660253 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:01.660259 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:01.660265 | orchestrator | 2026-01-05 00:56:01.660272 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-01-05 00:56:01.660278 | orchestrator | Monday 05 January 2026 00:54:49 +0000 (0:00:00.577) 0:01:24.923 ******** 
2026-01-05 00:56:01.660284 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:01.660290 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:01.660296 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:01.660302 | orchestrator | 2026-01-05 00:56:01.660308 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-01-05 00:56:01.660315 | orchestrator | Monday 05 January 2026 00:54:49 +0000 (0:00:00.361) 0:01:25.285 ******** 2026-01-05 00:56:01.660321 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:01.660327 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:01.660333 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:01.660339 | orchestrator | 2026-01-05 00:56:01.660346 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-01-05 00:56:01.660352 | orchestrator | Monday 05 January 2026 00:54:50 +0000 (0:00:00.325) 0:01:25.611 ******** 2026-01-05 00:56:01.660358 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:01.660364 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:01.660370 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:01.660376 | orchestrator | 2026-01-05 00:56:01.660382 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-01-05 00:56:01.660389 | orchestrator | Monday 05 January 2026 00:54:50 +0000 (0:00:00.366) 0:01:25.977 ******** 2026-01-05 00:56:01.660395 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:01.660401 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:01.660412 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:01.660421 | orchestrator | 2026-01-05 00:56:01.660444 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-05 00:56:01.660450 | orchestrator | Monday 05 January 2026 00:54:50 +0000 (0:00:00.341) 0:01:26.319 ******** 
2026-01-05 00:56:01.660457 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:56:01.660463 | orchestrator | 2026-01-05 00:56:01.660469 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-01-05 00:56:01.660475 | orchestrator | Monday 05 January 2026 00:54:51 +0000 (0:00:00.841) 0:01:27.161 ******** 2026-01-05 00:56:01.660482 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:01.660491 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:01.660497 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:01.660504 | orchestrator | 2026-01-05 00:56:01.660510 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-01-05 00:56:01.660516 | orchestrator | Monday 05 January 2026 00:54:52 +0000 (0:00:00.492) 0:01:27.653 ******** 2026-01-05 00:56:01.660522 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:01.660529 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:01.660535 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:01.660541 | orchestrator | 2026-01-05 00:56:01.660547 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-01-05 00:56:01.660553 | orchestrator | Monday 05 January 2026 00:54:52 +0000 (0:00:00.525) 0:01:28.178 ******** 2026-01-05 00:56:01.660559 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:01.660565 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:01.660572 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:01.660578 | orchestrator | 2026-01-05 00:56:01.660584 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-01-05 00:56:01.660590 | orchestrator | Monday 05 January 2026 00:54:53 +0000 (0:00:00.631) 0:01:28.810 ******** 2026-01-05 00:56:01.660596 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:01.660602 
| orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:01.660608 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:01.660615 | orchestrator | 2026-01-05 00:56:01.660621 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-01-05 00:56:01.660627 | orchestrator | Monday 05 January 2026 00:54:53 +0000 (0:00:00.385) 0:01:29.195 ******** 2026-01-05 00:56:01.660633 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:01.660640 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:01.660646 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:01.660652 | orchestrator | 2026-01-05 00:56:01.660658 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-01-05 00:56:01.660664 | orchestrator | Monday 05 January 2026 00:54:54 +0000 (0:00:00.473) 0:01:29.669 ******** 2026-01-05 00:56:01.660670 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:01.660676 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:01.660682 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:01.660689 | orchestrator | 2026-01-05 00:56:01.660695 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-01-05 00:56:01.660701 | orchestrator | Monday 05 January 2026 00:54:54 +0000 (0:00:00.454) 0:01:30.124 ******** 2026-01-05 00:56:01.660707 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:01.660713 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:01.660719 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:01.660726 | orchestrator | 2026-01-05 00:56:01.660732 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-01-05 00:56:01.660738 | orchestrator | Monday 05 January 2026 00:54:55 +0000 (0:00:00.832) 0:01:30.956 ******** 2026-01-05 00:56:01.660744 | orchestrator | skipping: [testbed-node-0] 2026-01-05 
00:56:01.660750 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:01.660756 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:01.660763 | orchestrator | 2026-01-05 00:56:01.660769 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-05 00:56:01.660779 | orchestrator | Monday 05 January 2026 00:54:56 +0000 (0:00:00.549) 0:01:31.506 ******** 2026-01-05 00:56:01.660786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.660800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.660807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.660818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.660826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.660836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.660843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.660849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.660856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.660862 | orchestrator | 2026-01-05 00:56:01.660869 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-05 00:56:01.660882 | orchestrator | Monday 05 January 2026 00:54:57 +0000 (0:00:01.563) 0:01:33.069 ******** 2026-01-05 00:56:01.660889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.660896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.660902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.660909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.660918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.660925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.660934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.660941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.660948 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.660954 | orchestrator | 2026-01-05 00:56:01.660961 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-01-05 00:56:01.660967 | orchestrator | Monday 05 January 2026 00:55:01 +0000 (0:00:04.214) 0:01:37.284 ******** 2026-01-05 00:56:01.660979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.660986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.660992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.660999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.661005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.661016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.661023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.661032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.661039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.661045 | orchestrator | 2026-01-05 00:56:01.661052 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-05 00:56:01.661058 | orchestrator | Monday 05 January 2026 00:55:04 +0000 (0:00:02.410) 0:01:39.695 ******** 2026-01-05 00:56:01.661069 | orchestrator | 2026-01-05 00:56:01.661076 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-05 00:56:01.661082 | orchestrator | Monday 05 January 2026 00:55:04 +0000 (0:00:00.071) 0:01:39.766 ******** 2026-01-05 00:56:01.661088 | orchestrator | 2026-01-05 00:56:01.661094 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-05 00:56:01.661100 | orchestrator | Monday 05 January 2026 00:55:04 +0000 (0:00:00.069) 0:01:39.836 ******** 2026-01-05 00:56:01.661107 | orchestrator | 2026-01-05 00:56:01.661113 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-01-05 00:56:01.661119 | orchestrator | Monday 05 January 2026 00:55:04 +0000 (0:00:00.069) 0:01:39.905 ******** 2026-01-05 00:56:01.661125 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:56:01.661131 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:56:01.661137 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:56:01.661144 | orchestrator | 2026-01-05 00:56:01.661150 | orchestrator | RUNNING HANDLER [ovn-db 
: Restart ovn-sb-db container] ************************* 2026-01-05 00:56:01.661156 | orchestrator | Monday 05 January 2026 00:55:07 +0000 (0:00:03.507) 0:01:43.413 ******** 2026-01-05 00:56:01.661162 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:56:01.661168 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:56:01.661175 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:56:01.661181 | orchestrator | 2026-01-05 00:56:01.661187 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-01-05 00:56:01.661193 | orchestrator | Monday 05 January 2026 00:55:11 +0000 (0:00:03.679) 0:01:47.093 ******** 2026-01-05 00:56:01.661199 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:56:01.661205 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:56:01.661211 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:56:01.661218 | orchestrator | 2026-01-05 00:56:01.661224 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-01-05 00:56:01.661230 | orchestrator | Monday 05 January 2026 00:55:19 +0000 (0:00:08.247) 0:01:55.341 ******** 2026-01-05 00:56:01.661236 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:01.661242 | orchestrator | 2026-01-05 00:56:01.661249 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-01-05 00:56:01.661255 | orchestrator | Monday 05 January 2026 00:55:20 +0000 (0:00:00.393) 0:01:55.734 ******** 2026-01-05 00:56:01.661261 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:01.661267 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:01.661273 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:01.661280 | orchestrator | 2026-01-05 00:56:01.661286 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-01-05 00:56:01.661292 | orchestrator | Monday 05 January 2026 00:55:21 +0000 (0:00:00.903) 0:01:56.638 
******** 2026-01-05 00:56:01.661298 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:01.661305 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:01.661311 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:56:01.661317 | orchestrator | 2026-01-05 00:56:01.661323 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-01-05 00:56:01.661329 | orchestrator | Monday 05 January 2026 00:55:21 +0000 (0:00:00.652) 0:01:57.291 ******** 2026-01-05 00:56:01.661336 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:01.661342 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:01.661348 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:01.661354 | orchestrator | 2026-01-05 00:56:01.661360 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-01-05 00:56:01.661366 | orchestrator | Monday 05 January 2026 00:55:22 +0000 (0:00:00.837) 0:01:58.128 ******** 2026-01-05 00:56:01.661373 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:01.661379 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:01.661385 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:56:01.661391 | orchestrator | 2026-01-05 00:56:01.661397 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-01-05 00:56:01.661407 | orchestrator | Monday 05 January 2026 00:55:23 +0000 (0:00:01.012) 0:01:59.140 ******** 2026-01-05 00:56:01.661414 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:01.661420 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:01.661451 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:01.661458 | orchestrator | 2026-01-05 00:56:01.661465 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-01-05 00:56:01.661471 | orchestrator | Monday 05 January 2026 00:55:24 +0000 (0:00:00.823) 0:01:59.964 ******** 2026-01-05 00:56:01.661477 | 
orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:01.661483 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:01.661490 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:01.661496 | orchestrator | 2026-01-05 00:56:01.661502 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-01-05 00:56:01.661508 | orchestrator | Monday 05 January 2026 00:55:25 +0000 (0:00:00.813) 0:02:00.777 ******** 2026-01-05 00:56:01.661514 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:56:01.661524 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:56:01.661530 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:56:01.661536 | orchestrator | 2026-01-05 00:56:01.661543 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-05 00:56:01.661549 | orchestrator | Monday 05 January 2026 00:55:25 +0000 (0:00:00.355) 0:02:01.132 ******** 2026-01-05 00:56:01.661555 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.661562 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.661569 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.661575 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.661582 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.661589 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.661595 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.661606 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': 
{'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.661617 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.661624 | orchestrator | 2026-01-05 00:56:01.661630 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-05 00:56:01.661636 | orchestrator | Monday 05 January 2026 00:55:27 +0000 (0:00:01.492) 0:02:02.625 ******** 2026-01-05 00:56:01.661646 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.661653 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.661659 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.661666 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.661672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.661679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.661685 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 
00:56:01.661696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.661702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.661709 | orchestrator | 2026-01-05 00:56:01.661715 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-01-05 00:56:01.661721 | orchestrator | Monday 05 January 2026 00:55:31 +0000 (0:00:04.552) 0:02:07.177 ******** 2026-01-05 00:56:01.661731 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.661741 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.661748 | orchestrator | ok: 
[testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.661754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.661761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.661767 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:56:01.661773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:56:01.661784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:56:01.661790 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:56:01.661797 | orchestrator |
2026-01-05 00:56:01.661803 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-05 00:56:01.661809 | orchestrator | Monday 05 January 2026 00:55:34 +0000 (0:00:03.119) 0:02:10.297 ********
2026-01-05 00:56:01.661815 | orchestrator |
2026-01-05 00:56:01.661822 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-05 00:56:01.661828 | orchestrator | Monday 05 January 2026 00:55:34 +0000 (0:00:00.068) 0:02:10.366 ********
2026-01-05 00:56:01.661834 | orchestrator |
2026-01-05 00:56:01.661840 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-05 00:56:01.661846 | orchestrator | Monday 05 January 2026 00:55:34 +0000 (0:00:00.071) 0:02:10.437 ********
2026-01-05 00:56:01.661852 | orchestrator |
2026-01-05 00:56:01.661859 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-01-05 00:56:01.661865 | orchestrator | Monday 05 January 2026 00:55:35 +0000 (0:00:00.073) 0:02:10.511 ********
2026-01-05 00:56:01.661871 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:56:01.661877 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:56:01.661884 | orchestrator |
2026-01-05 00:56:01.661893 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-01-05 00:56:01.661899 | orchestrator | Monday 05 January 2026 00:55:41 +0000 (0:00:06.861) 0:02:17.373 ********
2026-01-05 00:56:01.661906 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:56:01.661912 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:56:01.661918 | orchestrator |
2026-01-05 00:56:01.661924 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-01-05 00:56:01.661930 | orchestrator | Monday 05 January 2026 00:55:48 +0000 (0:00:06.489) 0:02:23.863 ********
2026-01-05 00:56:01.661937 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:56:01.661943 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:56:01.661949 | orchestrator |
2026-01-05 00:56:01.661958 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-01-05 00:56:01.661964 | orchestrator | Monday 05 January 2026 00:55:54 +0000 (0:00:06.489) 0:02:30.352 ********
2026-01-05 00:56:01.661971 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:56:01.661977 | orchestrator |
2026-01-05 00:56:01.661983 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-01-05 00:56:01.661989 | orchestrator | Monday 05 January 2026 00:55:55 +0000 (0:00:00.141) 0:02:30.493 ********
2026-01-05 00:56:01.661995 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:01.662002 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:01.662008 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:01.662053 | orchestrator |
2026-01-05 00:56:01.662061 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-01-05 00:56:01.662067 | orchestrator | Monday 05 January 2026 00:55:55 +0000 (0:00:00.804) 0:02:31.298 ********
2026-01-05 00:56:01.662073 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:01.662080 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:01.662086 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:56:01.662092 | orchestrator |
2026-01-05 00:56:01.662099 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-01-05 00:56:01.662109 | orchestrator | Monday 05 January 2026 00:55:56 +0000 (0:00:00.732) 0:02:32.031 ********
2026-01-05 00:56:01.662116 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:01.662122 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:01.662128 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:01.662134 | orchestrator |
2026-01-05 00:56:01.662140 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-01-05 00:56:01.662147 | orchestrator | Monday 05 January 2026 00:55:57 +0000 (0:00:00.807) 0:02:32.838 ********
2026-01-05 00:56:01.662153 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:56:01.662159 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:56:01.662165 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:56:01.662171 | orchestrator |
2026-01-05 00:56:01.662178 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-01-05 00:56:01.662184 | orchestrator | Monday 05 January 2026 00:55:58 +0000 (0:00:00.760) 0:02:33.598 ********
2026-01-05 00:56:01.662190 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:01.662196 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:01.662202 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:01.662209 | orchestrator |
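A note on the pattern visible above: the "Get OVN_Northbound/OVN_Southbound cluster leader" tasks query the Raft role of each DB server, and the follow-up "Configure OVN … connection settings" task then runs only on the leader, which is why it reports "changed" on testbed-node-0 and "skipping" on the other two nodes. The role is typically read from `ovs-appctl … cluster/status <DB>` output inside the DB container. A minimal, hypothetical sketch of parsing that status text (the sample output below is illustrative, not captured from this job):

```python
def parse_cluster_role(status_output: str) -> str:
    """Extract the Raft role from `cluster/status`-style output.

    The output of `ovs-appctl cluster/status` contains a line such as
    'Role: leader' or 'Role: follower'; this simplified parser only
    looks for that line.
    """
    for line in status_output.splitlines():
        line = line.strip()
        if line.startswith("Role:"):
            return line.split(":", 1)[1].strip()
    raise ValueError("no 'Role:' line found in cluster/status output")


# Illustrative sample fragment (shape of a clustered OVSDB status report);
# the addresses and IDs here are invented for the example.
sample = """\
Name: OVN_Northbound
Status: cluster member
Role: leader
Term: 4
"""

print(parse_cluster_role(sample))
```

A playbook would run such a check on every node and set a fact like "I am the leader" to gate the connection-settings task, matching the changed/skipping pattern in the log.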
2026-01-05 00:56:01.662215 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-01-05 00:56:01.662221 | orchestrator | Monday 05 January 2026 00:55:59 +0000 (0:00:00.919) 0:02:34.518 ********
2026-01-05 00:56:01.662227 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:56:01.662233 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:56:01.662239 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:56:01.662246 | orchestrator |
2026-01-05 00:56:01.662252 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:56:01.662258 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-01-05 00:56:01.662265 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-01-05 00:56:01.662271 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-01-05 00:56:01.662278 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:56:01.662284 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:56:01.662290 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:56:01.662296 | orchestrator |
2026-01-05 00:56:01.662303 | orchestrator |
2026-01-05 00:56:01.662309 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:56:01.662315 | orchestrator | Monday 05 January 2026 00:55:59 +0000 (0:00:00.951) 0:02:35.470 ********
2026-01-05 00:56:01.662322 | orchestrator | ===============================================================================
2026-01-05 00:56:01.662328 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 33.85s
2026-01-05 00:56:01.662334 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 22.83s
2026-01-05 00:56:01.662340 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.74s
2026-01-05 00:56:01.662346 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 10.37s
2026-01-05 00:56:01.662352 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 10.17s
2026-01-05 00:56:01.662358 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.55s
2026-01-05 00:56:01.662365 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.21s
2026-01-05 00:56:01.662375 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.12s
2026-01-05 00:56:01.662385 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.96s
2026-01-05 00:56:01.662391 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.41s
2026-01-05 00:56:01.662397 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.24s
2026-01-05 00:56:01.662404 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.03s
2026-01-05 00:56:01.662410 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.96s
2026-01-05 00:56:01.662422 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.95s
2026-01-05 00:56:01.662443 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.81s
2026-01-05 00:56:01.662449 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.77s
2026-01-05 00:56:01.662455 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.68s
2026-01-05 00:56:01.662461 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.56s
2026-01-05 00:56:01.662467 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.49s
2026-01-05 00:56:01.662474 | orchestrator | ovn-db : Checking for any existing OVN DB container volumes ------------- 1.23s
2026-01-05 00:56:01.662480 | orchestrator | 2026-01-05 00:56:01 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:56:01.662486 | orchestrator | 2026-01-05 00:56:01 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:56:04.699321 | orchestrator | 2026-01-05 00:56:04 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED
2026-01-05 00:56:04.701591 | orchestrator | 2026-01-05 00:56:04 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:56:04.701665 | orchestrator | 2026-01-05 00:56:04 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:56:07.744990 | orchestrator | 2026-01-05 00:56:07 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED
2026-01-05 00:56:07.746770 | orchestrator | 2026-01-05 00:56:07 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:56:07.746846 | orchestrator | 2026-01-05 00:56:07 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:56:10.801364 | orchestrator | 2026-01-05 00:56:10 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED
2026-01-05 00:56:10.802817 | orchestrator | 2026-01-05 00:56:10 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:56:10.803125 | orchestrator | 2026-01-05 00:56:10 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:56:13.859747 | orchestrator | 2026-01-05 00:56:13 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED
2026-01-05 00:56:13.861112 | orchestrator | 2026-01-05 00:56:13 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:56:13.861307 | orchestrator | 2026-01-05 00:56:13 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:56:16.933207 | orchestrator | 2026-01-05 00:56:16 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED
2026-01-05 00:56:16.940366 | orchestrator | 2026-01-05 00:56:16 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:56:16.941753 | orchestrator | 2026-01-05 00:56:16 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:56:19.990272 | orchestrator | 2026-01-05 00:56:19 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED
2026-01-05 00:56:19.993898 | orchestrator | 2026-01-05 00:56:19 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:56:19.993957 | orchestrator | 2026-01-05 00:56:19 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:56:23.031560 | orchestrator | 2026-01-05 00:56:23 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED
2026-01-05 00:56:23.031944 | orchestrator | 2026-01-05 00:56:23 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:56:23.031967 | orchestrator | 2026-01-05 00:56:23 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:56:26.088116 | orchestrator | 2026-01-05 00:56:26 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED
2026-01-05 00:56:26.089346 | orchestrator | 2026-01-05 00:56:26 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:56:26.090434 | orchestrator | 2026-01-05 00:56:26 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:56:29.135606 | orchestrator | 2026-01-05 00:56:29 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED
2026-01-05 00:56:29.135952 | orchestrator | 2026-01-05 00:56:29 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:56:29.136149 | orchestrator | 2026-01-05 00:56:29 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:56:32.174472 | orchestrator | 2026-01-05 00:56:32 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED
2026-01-05 00:56:32.177419 | orchestrator | 2026-01-05 00:56:32 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:56:32.177483 | orchestrator | 2026-01-05 00:56:32 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:56:35.224251 | orchestrator | 2026-01-05 00:56:35 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED
2026-01-05 00:56:35.225798 | orchestrator | 2026-01-05 00:56:35 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:56:35.225855 | orchestrator | 2026-01-05 00:56:35 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:56:38.276824 | orchestrator | 2026-01-05 00:56:38 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED
2026-01-05 00:56:38.276937 | orchestrator | 2026-01-05 00:56:38 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:56:38.276956 | orchestrator | 2026-01-05 00:56:38 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:56:41.312934 | orchestrator | 2026-01-05 00:56:41 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED
2026-01-05 00:56:41.315593 | orchestrator | 2026-01-05 00:56:41 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:56:41.315619 | orchestrator | 2026-01-05 00:56:41 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:56:44.360202 | orchestrator | 2026-01-05 00:56:44 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED
2026-01-05 00:56:44.360677 | orchestrator | 2026-01-05 00:56:44 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:56:44.360707 | orchestrator | 2026-01-05 00:56:44 | INFO  | Wait 1 second(s) until the next check
2026-01-05
00:56:47.408537 | orchestrator | 2026-01-05 00:56:47 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:56:47.410001 | orchestrator | 2026-01-05 00:56:47 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:56:47.410340 | orchestrator | 2026-01-05 00:56:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:56:50.454841 | orchestrator | 2026-01-05 00:56:50 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:56:50.454926 | orchestrator | 2026-01-05 00:56:50 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:56:50.454956 | orchestrator | 2026-01-05 00:56:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:56:53.491109 | orchestrator | 2026-01-05 00:56:53 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:56:53.491706 | orchestrator | 2026-01-05 00:56:53 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:56:53.491792 | orchestrator | 2026-01-05 00:56:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:56:56.543210 | orchestrator | 2026-01-05 00:56:56 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:56:56.545361 | orchestrator | 2026-01-05 00:56:56 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:56:56.545442 | orchestrator | 2026-01-05 00:56:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:56:59.589471 | orchestrator | 2026-01-05 00:56:59 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:56:59.591735 | orchestrator | 2026-01-05 00:56:59 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:56:59.591823 | orchestrator | 2026-01-05 00:56:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:57:02.640927 | orchestrator | 2026-01-05 00:57:02 | INFO  | Task 
c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:57:02.641225 | orchestrator | 2026-01-05 00:57:02 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:57:02.641265 | orchestrator | 2026-01-05 00:57:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:57:05.683175 | orchestrator | 2026-01-05 00:57:05 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:57:05.685275 | orchestrator | 2026-01-05 00:57:05 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:57:05.685558 | orchestrator | 2026-01-05 00:57:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:57:08.732662 | orchestrator | 2026-01-05 00:57:08 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:57:08.734610 | orchestrator | 2026-01-05 00:57:08 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:57:08.734818 | orchestrator | 2026-01-05 00:57:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:57:11.785027 | orchestrator | 2026-01-05 00:57:11 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:57:11.786225 | orchestrator | 2026-01-05 00:57:11 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:57:11.786277 | orchestrator | 2026-01-05 00:57:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:57:14.835719 | orchestrator | 2026-01-05 00:57:14 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:57:14.839224 | orchestrator | 2026-01-05 00:57:14 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:57:14.839370 | orchestrator | 2026-01-05 00:57:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:57:17.907592 | orchestrator | 2026-01-05 00:57:17 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 
00:57:17.908658 | orchestrator | 2026-01-05 00:57:17 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:57:17.908692 | orchestrator | 2026-01-05 00:57:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:57:20.971612 | orchestrator | 2026-01-05 00:57:20 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:57:20.973013 | orchestrator | 2026-01-05 00:57:20 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:57:20.974489 | orchestrator | 2026-01-05 00:57:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:57:24.029679 | orchestrator | 2026-01-05 00:57:24 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:57:24.029737 | orchestrator | 2026-01-05 00:57:24 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:57:24.029744 | orchestrator | 2026-01-05 00:57:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:57:27.087046 | orchestrator | 2026-01-05 00:57:27 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:57:27.089643 | orchestrator | 2026-01-05 00:57:27 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:57:27.089702 | orchestrator | 2026-01-05 00:57:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:57:30.147726 | orchestrator | 2026-01-05 00:57:30 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:57:30.148752 | orchestrator | 2026-01-05 00:57:30 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:57:30.148803 | orchestrator | 2026-01-05 00:57:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:57:33.198982 | orchestrator | 2026-01-05 00:57:33 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:57:33.199084 | orchestrator | 2026-01-05 00:57:33 | INFO  | Task 
41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:57:33.199095 | orchestrator | 2026-01-05 00:57:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:57:36.245153 | orchestrator | 2026-01-05 00:57:36 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:57:36.245683 | orchestrator | 2026-01-05 00:57:36 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:57:36.245734 | orchestrator | 2026-01-05 00:57:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:57:39.304379 | orchestrator | 2026-01-05 00:57:39 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:57:39.307044 | orchestrator | 2026-01-05 00:57:39 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:57:39.307362 | orchestrator | 2026-01-05 00:57:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:57:42.358007 | orchestrator | 2026-01-05 00:57:42 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:57:42.359507 | orchestrator | 2026-01-05 00:57:42 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:57:42.359596 | orchestrator | 2026-01-05 00:57:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:57:45.412741 | orchestrator | 2026-01-05 00:57:45 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:57:45.414615 | orchestrator | 2026-01-05 00:57:45 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:57:45.414685 | orchestrator | 2026-01-05 00:57:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:57:48.477431 | orchestrator | 2026-01-05 00:57:48 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:57:48.477612 | orchestrator | 2026-01-05 00:57:48 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 
00:57:48.479665 | orchestrator | 2026-01-05 00:57:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:57:51.527537 | orchestrator | 2026-01-05 00:57:51 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:57:51.528573 | orchestrator | 2026-01-05 00:57:51 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:57:51.528618 | orchestrator | 2026-01-05 00:57:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:57:54.592974 | orchestrator | 2026-01-05 00:57:54 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:57:54.593088 | orchestrator | 2026-01-05 00:57:54 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:57:54.593105 | orchestrator | 2026-01-05 00:57:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:57:57.641415 | orchestrator | 2026-01-05 00:57:57 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:57:57.644478 | orchestrator | 2026-01-05 00:57:57 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:57:57.645029 | orchestrator | 2026-01-05 00:57:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:58:00.703612 | orchestrator | 2026-01-05 00:58:00 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:58:00.707336 | orchestrator | 2026-01-05 00:58:00 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:58:00.707439 | orchestrator | 2026-01-05 00:58:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:58:03.744902 | orchestrator | 2026-01-05 00:58:03 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:58:03.745815 | orchestrator | 2026-01-05 00:58:03 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:58:03.745871 | orchestrator | 2026-01-05 00:58:03 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 00:58:06.794452 | orchestrator | 2026-01-05 00:58:06 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:58:06.795677 | orchestrator | 2026-01-05 00:58:06 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:58:06.795811 | orchestrator | 2026-01-05 00:58:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:58:09.845329 | orchestrator | 2026-01-05 00:58:09 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:58:09.846139 | orchestrator | 2026-01-05 00:58:09 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:58:09.846205 | orchestrator | 2026-01-05 00:58:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:58:12.886434 | orchestrator | 2026-01-05 00:58:12 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:58:12.886523 | orchestrator | 2026-01-05 00:58:12 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:58:12.886531 | orchestrator | 2026-01-05 00:58:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:58:15.934168 | orchestrator | 2026-01-05 00:58:15 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:58:15.936889 | orchestrator | 2026-01-05 00:58:15 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:58:15.936948 | orchestrator | 2026-01-05 00:58:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:58:18.986933 | orchestrator | 2026-01-05 00:58:18 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:58:18.988346 | orchestrator | 2026-01-05 00:58:18 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:58:18.988494 | orchestrator | 2026-01-05 00:58:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:58:22.039769 | orchestrator | 2026-01-05 
00:58:22 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:58:22.040983 | orchestrator | 2026-01-05 00:58:22 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:58:22.041037 | orchestrator | 2026-01-05 00:58:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:58:25.101769 | orchestrator | 2026-01-05 00:58:25 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:58:25.104212 | orchestrator | 2026-01-05 00:58:25 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:58:25.104294 | orchestrator | 2026-01-05 00:58:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:58:28.148806 | orchestrator | 2026-01-05 00:58:28 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:58:28.151793 | orchestrator | 2026-01-05 00:58:28 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:58:28.151888 | orchestrator | 2026-01-05 00:58:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:58:31.189411 | orchestrator | 2026-01-05 00:58:31 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:58:31.189504 | orchestrator | 2026-01-05 00:58:31 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:58:31.189513 | orchestrator | 2026-01-05 00:58:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:58:34.227013 | orchestrator | 2026-01-05 00:58:34 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:58:34.227092 | orchestrator | 2026-01-05 00:58:34 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:58:34.227099 | orchestrator | 2026-01-05 00:58:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:58:37.272848 | orchestrator | 2026-01-05 00:58:37 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state 
STARTED 2026-01-05 00:58:37.274169 | orchestrator | 2026-01-05 00:58:37 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:58:37.274390 | orchestrator | 2026-01-05 00:58:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:58:40.325772 | orchestrator | 2026-01-05 00:58:40 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:58:40.328238 | orchestrator | 2026-01-05 00:58:40 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:58:40.328648 | orchestrator | 2026-01-05 00:58:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:58:43.378488 | orchestrator | 2026-01-05 00:58:43 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:58:43.378843 | orchestrator | 2026-01-05 00:58:43 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:58:43.378863 | orchestrator | 2026-01-05 00:58:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:58:46.424596 | orchestrator | 2026-01-05 00:58:46 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:58:46.425293 | orchestrator | 2026-01-05 00:58:46 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:58:46.425376 | orchestrator | 2026-01-05 00:58:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:58:49.468824 | orchestrator | 2026-01-05 00:58:49 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:58:49.470291 | orchestrator | 2026-01-05 00:58:49 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:58:49.470334 | orchestrator | 2026-01-05 00:58:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:58:52.523560 | orchestrator | 2026-01-05 00:58:52 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:58:52.525721 | orchestrator | 2026-01-05 00:58:52 | INFO  
| Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:58:52.525794 | orchestrator | 2026-01-05 00:58:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:58:55.575761 | orchestrator | 2026-01-05 00:58:55 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:58:55.575861 | orchestrator | 2026-01-05 00:58:55 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:58:55.575872 | orchestrator | 2026-01-05 00:58:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:58:58.630349 | orchestrator | 2026-01-05 00:58:58 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:58:58.633906 | orchestrator | 2026-01-05 00:58:58 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:58:58.633977 | orchestrator | 2026-01-05 00:58:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:59:01.684438 | orchestrator | 2026-01-05 00:59:01 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:59:01.689704 | orchestrator | 2026-01-05 00:59:01 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:59:01.689801 | orchestrator | 2026-01-05 00:59:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:59:04.743132 | orchestrator | 2026-01-05 00:59:04 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:59:04.745097 | orchestrator | 2026-01-05 00:59:04 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:59:04.745704 | orchestrator | 2026-01-05 00:59:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:59:07.784562 | orchestrator | 2026-01-05 00:59:07 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED 2026-01-05 00:59:07.785622 | orchestrator | 2026-01-05 00:59:07 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 
00:59:07.785686 | orchestrator | 2026-01-05 00:59:07 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:59:10.823683 | orchestrator | 2026-01-05 00:59:10 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED
2026-01-05 00:59:10.824387 | orchestrator | 2026-01-05 00:59:10 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:59:10.824441 | orchestrator | 2026-01-05 00:59:10 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:59:13.870887 | orchestrator | 2026-01-05 00:59:13 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state STARTED
2026-01-05 00:59:13.870971 | orchestrator | 2026-01-05 00:59:13 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED
2026-01-05 00:59:13.870979 | orchestrator | 2026-01-05 00:59:13 | INFO  | Wait 1 second(s) until the next check
2026-01-05 00:59:16.915854 | orchestrator | 2026-01-05 00:59:16 | INFO  | Task c6c52521-7f12-44cb-99db-fec7e2b83c88 is in state SUCCESS
2026-01-05 00:59:16.918445 | orchestrator |
2026-01-05 00:59:16.918485 | orchestrator |
2026-01-05 00:59:16.918491 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 00:59:16.918496 | orchestrator |
2026-01-05 00:59:16.918500 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 00:59:16.918517 | orchestrator | Monday 05 January 2026 00:52:02 +0000 (0:00:00.357) 0:00:00.357 ********
2026-01-05 00:59:16.918521 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:59:16.918526 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:59:16.918530 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:59:16.918534 | orchestrator |
2026-01-05 00:59:16.918538 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 00:59:16.918541 | orchestrator | Monday 05 January 2026 00:52:02 +0000 (0:00:00.393) 0:00:00.751 ********
2026-01-05 00:59:16.918546 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-01-05 00:59:16.918550 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-01-05 00:59:16.918554 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-01-05 00:59:16.918558 | orchestrator |
2026-01-05 00:59:16.918561 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-01-05 00:59:16.918565 | orchestrator |
2026-01-05 00:59:16.918569 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-01-05 00:59:16.918573 | orchestrator | Monday 05 January 2026 00:52:03 +0000 (0:00:00.678) 0:00:01.430 ********
2026-01-05 00:59:16.918584 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:59:16.918588 | orchestrator |
2026-01-05 00:59:16.918592 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-01-05 00:59:16.918596 | orchestrator | Monday 05 January 2026 00:52:03 +0000 (0:00:00.686) 0:00:02.116 ********
2026-01-05 00:59:16.918600 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:59:16.918604 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:59:16.918607 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:59:16.918611 | orchestrator |
2026-01-05 00:59:16.918615 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-01-05 00:59:16.918619 | orchestrator | Monday 05 January 2026 00:52:04 +0000 (0:00:00.897) 0:00:03.014 ********
2026-01-05 00:59:16.918623 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:59:16.918627 | orchestrator |
2026-01-05 00:59:16.918631 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-01-05 00:59:16.918635 | orchestrator | Monday 05 January 2026 00:52:06 +0000 (0:00:01.235) 0:00:04.249 ********
2026-01-05 00:59:16.918638 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:59:16.918642 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:59:16.918646 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:59:16.918650 | orchestrator |
2026-01-05 00:59:16.918654 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-01-05 00:59:16.918658 | orchestrator | Monday 05 January 2026 00:52:07 +0000 (0:00:01.565) 0:00:05.814 ********
2026-01-05 00:59:16.918661 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-05 00:59:16.918666 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-05 00:59:16.918669 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-05 00:59:16.918674 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-05 00:59:16.918678 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-05 00:59:16.918689 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-05 00:59:16.918693 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-05 00:59:16.918696 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-05 00:59:16.918700 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-05 00:59:16.918704 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-05 00:59:16.918711 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-05 00:59:16.918730 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-05 00:59:16.918734 | orchestrator |
2026-01-05 00:59:16.918738 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-01-05 00:59:16.918742 | orchestrator | Monday 05 January 2026 00:52:12 +0000 (0:00:04.778) 0:00:10.593 ********
2026-01-05 00:59:16.918745 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-01-05 00:59:16.918749 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-01-05 00:59:16.918753 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-01-05 00:59:16.918757 | orchestrator |
2026-01-05 00:59:16.918763 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-01-05 00:59:16.918770 | orchestrator | Monday 05 January 2026 00:52:13 +0000 (0:00:00.917) 0:00:11.510 ********
2026-01-05 00:59:16.918775 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-01-05 00:59:16.918781 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-01-05 00:59:16.918790 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-01-05 00:59:16.918799 | orchestrator |
2026-01-05 00:59:16.918805 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-01-05 00:59:16.919561 | orchestrator | Monday 05 January 2026 00:52:15 +0000 (0:00:02.573) 0:00:14.084 ********
2026-01-05 00:59:16.919583 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-01-05 00:59:16.919588 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:59:16.919600 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-01-05 00:59:16.919604 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:59:16.919608 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-01-05 00:59:16.919612 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:59:16.919616 | orchestrator |
2026-01-05 00:59:16.919620 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-01-05 00:59:16.919624 | orchestrator | Monday 05 January 2026 00:52:17 +0000 (0:00:01.868) 0:00:15.952 ********
2026-01-05 00:59:16.919629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-05 00:59:16.919636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-05 00:59:16.919640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-05 00:59:16.919650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-05 00:59:16.919666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-05 00:59:16.919673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-05 00:59:16.919678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-05 00:59:16.919682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-05 00:59:16.919709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-05 00:59:16.919714 | orchestrator |
2026-01-05 00:59:16.919718
| orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-01-05 00:59:16.919722 | orchestrator | Monday 05 January 2026 00:52:20 +0000 (0:00:03.135) 0:00:19.088 ******** 2026-01-05 00:59:16.919725 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.919734 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.919738 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.919742 | orchestrator | 2026-01-05 00:59:16.919746 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-01-05 00:59:16.919750 | orchestrator | Monday 05 January 2026 00:52:22 +0000 (0:00:01.583) 0:00:20.671 ******** 2026-01-05 00:59:16.919754 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-01-05 00:59:16.919758 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-01-05 00:59:16.919761 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-01-05 00:59:16.919765 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-01-05 00:59:16.919769 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-01-05 00:59:16.919773 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-01-05 00:59:16.919777 | orchestrator | 2026-01-05 00:59:16.919780 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-01-05 00:59:16.919784 | orchestrator | Monday 05 January 2026 00:52:26 +0000 (0:00:03.581) 0:00:24.253 ******** 2026-01-05 00:59:16.919788 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.919792 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.919796 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.919801 | orchestrator | 2026-01-05 00:59:16.919805 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-01-05 00:59:16.919809 | orchestrator | Monday 05 January 2026 00:52:27 +0000 (0:00:01.767) 0:00:26.020 
******** 2026-01-05 00:59:16.919835 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:16.919840 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:16.919843 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:16.919847 | orchestrator | 2026-01-05 00:59:16.919851 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-01-05 00:59:16.920373 | orchestrator | Monday 05 January 2026 00:52:30 +0000 (0:00:02.346) 0:00:28.367 ******** 2026-01-05 00:59:16.920388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 00:59:16.920421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:59:16.920427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:59:16.920432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__dcb2d5c9863be3cf348843a963a6b70a33df91df', '__omit_place_holder__dcb2d5c9863be3cf348843a963a6b70a33df91df'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-05 00:59:16.920442 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.920446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 00:59:16.920526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:59:16.920533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:59:16.920537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__dcb2d5c9863be3cf348843a963a6b70a33df91df', '__omit_place_holder__dcb2d5c9863be3cf348843a963a6b70a33df91df'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-05 00:59:16.920541 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.920555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 00:59:16.920560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:59:16.920567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:59:16.920571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': 
['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__dcb2d5c9863be3cf348843a963a6b70a33df91df', '__omit_place_holder__dcb2d5c9863be3cf348843a963a6b70a33df91df'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-05 00:59:16.920575 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.920579 | orchestrator | 2026-01-05 00:59:16.920583 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-01-05 00:59:16.920587 | orchestrator | Monday 05 January 2026 00:52:31 +0000 (0:00:01.716) 0:00:30.084 ******** 2026-01-05 00:59:16.920593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-05 00:59:16.920597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-05 00:59:16.920611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-05 00:59:16.920615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:59:16.920622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:59:16.920626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__dcb2d5c9863be3cf348843a963a6b70a33df91df', '__omit_place_holder__dcb2d5c9863be3cf348843a963a6b70a33df91df'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-05 00:59:16.920632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:59:16.920636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 
 2026-01-05 00:59:16.920640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__dcb2d5c9863be3cf348843a963a6b70a33df91df', '__omit_place_holder__dcb2d5c9863be3cf348843a963a6b70a33df91df'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-05 00:59:16.920652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:59:16.920659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:59:16.920663 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__dcb2d5c9863be3cf348843a963a6b70a33df91df', '__omit_place_holder__dcb2d5c9863be3cf348843a963a6b70a33df91df'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-05 00:59:16.920667 | orchestrator | 2026-01-05 00:59:16.920671 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-01-05 00:59:16.920675 | orchestrator | Monday 05 January 2026 00:52:36 +0000 (0:00:04.456) 0:00:34.540 ******** 2026-01-05 00:59:16.920679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-05 00:59:16.920685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-05 00:59:16.920689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-05 00:59:16.920703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:59:16.920710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:59:16.920714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:59:16.920718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:59:16.920756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:59:16.920761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:59:16.920765 | orchestrator | 2026-01-05 00:59:16.920770 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-01-05 00:59:16.920777 | orchestrator | Monday 05 January 2026 00:52:40 +0000 (0:00:03.836) 0:00:38.377 ******** 2026-01-05 00:59:16.920787 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-05 00:59:16.920795 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-05 00:59:16.920801 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-05 00:59:16.920808 | orchestrator | 2026-01-05 00:59:16.920814 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-01-05 00:59:16.920823 | orchestrator | Monday 05 January 2026 00:52:43 +0000 (0:00:03.308) 0:00:41.685 ******** 2026-01-05 00:59:16.920829 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-05 00:59:16.920835 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-05 00:59:16.920842 | orchestrator | changed: 
[testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-05 00:59:16.920848 | orchestrator | 2026-01-05 00:59:16.920875 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-01-05 00:59:16.920915 | orchestrator | Monday 05 January 2026 00:52:50 +0000 (0:00:07.251) 0:00:48.937 ******** 2026-01-05 00:59:16.920920 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.920923 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.920927 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.920931 | orchestrator | 2026-01-05 00:59:16.920935 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-01-05 00:59:16.920939 | orchestrator | Monday 05 January 2026 00:52:51 +0000 (0:00:00.869) 0:00:49.806 ******** 2026-01-05 00:59:16.920942 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-05 00:59:16.920947 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-05 00:59:16.920951 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-05 00:59:16.920955 | orchestrator | 2026-01-05 00:59:16.920959 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-01-05 00:59:16.920963 | orchestrator | Monday 05 January 2026 00:52:55 +0000 (0:00:03.940) 0:00:53.747 ******** 2026-01-05 00:59:16.920966 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-05 00:59:16.920970 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-05 00:59:16.920974 | orchestrator | 
changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-05 00:59:16.920978 | orchestrator | 2026-01-05 00:59:16.920982 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-01-05 00:59:16.920985 | orchestrator | Monday 05 January 2026 00:52:58 +0000 (0:00:02.841) 0:00:56.588 ******** 2026-01-05 00:59:16.920989 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-01-05 00:59:16.920993 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-01-05 00:59:16.920997 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-01-05 00:59:16.921001 | orchestrator | 2026-01-05 00:59:16.921004 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-01-05 00:59:16.921008 | orchestrator | Monday 05 January 2026 00:53:00 +0000 (0:00:02.131) 0:00:58.720 ******** 2026-01-05 00:59:16.921012 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-01-05 00:59:16.921016 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-01-05 00:59:16.921020 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-01-05 00:59:16.921037 | orchestrator | 2026-01-05 00:59:16.921043 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-01-05 00:59:16.921049 | orchestrator | Monday 05 January 2026 00:53:02 +0000 (0:00:02.070) 0:01:00.790 ******** 2026-01-05 00:59:16.921055 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:59:16.921469 | orchestrator | 2026-01-05 00:59:16.921474 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-01-05 00:59:16.921479 | orchestrator | Monday 05 January 2026 00:53:03 +0000 (0:00:01.003) 0:01:01.794 ******** 2026-01-05 
00:59:16.921491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-05 00:59:16.921497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-05 00:59:16.921514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-05 
00:59:16.921520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:59:16.921525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:59:16.921530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 
2026-01-05 00:59:16.921536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:59:16.921544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:59:16.921549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:59:16.921553 | orchestrator | 2026-01-05 00:59:16.921557 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-01-05 00:59:16.921562 | orchestrator | Monday 05 January 2026 00:53:07 +0000 (0:00:03.706) 0:01:05.501 ******** 2026-01-05 00:59:16.921575 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 00:59:16.921580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 00:59:16.921584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:59:16.921588 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:59:16.921596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:59:16.921600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:59:16.921605 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.921609 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.921613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 00:59:16.921626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:59:16.921631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:59:16.921635 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.921639 | orchestrator | 2026-01-05 00:59:16.921643 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-01-05 
00:59:16.921647 | orchestrator | Monday 05 January 2026 00:53:08 +0000 (0:00:00.891) 0:01:06.393 ******** 2026-01-05 00:59:16.921652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 00:59:16.921659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:59:16.921664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 
00:59:16.921668 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.921672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 00:59:16.921691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:59:16.921698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 00:59:16.921704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:59:16.921716 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.921755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:59:16.921765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:59:16.921772 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.921776 | 
orchestrator | 2026-01-05 00:59:16.921780 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-05 00:59:16.921784 | orchestrator | Monday 05 January 2026 00:53:09 +0000 (0:00:01.548) 0:01:07.942 ******** 2026-01-05 00:59:16.921788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 00:59:16.921803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:59:16.921807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:59:16.921811 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.921815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 00:59:16.922219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:59:16.922231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:59:16.922235 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.922241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 00:59:16.922245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:59:16.922292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:59:16.922298 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.922302 | orchestrator | 2026-01-05 00:59:16.922306 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-05 00:59:16.922311 | orchestrator | Monday 05 January 2026 00:53:12 +0000 (0:00:03.181) 0:01:11.123 ******** 2026-01-05 00:59:16.922315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 00:59:16.922322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:59:16.922326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:59:16.922330 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.922336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 00:59:16.922340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:59:16.922344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:59:16.922348 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.922361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 00:59:16.922366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:59:16.922372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:59:16.922376 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.922380 | orchestrator | 2026-01-05 00:59:16.922384 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-05 00:59:16.922387 | orchestrator | Monday 05 January 2026 00:53:13 +0000 (0:00:00.716) 0:01:11.840 ******** 2026-01-05 00:59:16.922391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 00:59:16.922397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:59:16.922401 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:59:16.922405 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.922417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 00:59:16.922422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:59:16.922429 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:59:16.922433 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.922437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 00:59:16.922442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:59:16.922446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:59:16.922450 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.922454 | orchestrator | 2026-01-05 00:59:16.922458 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-01-05 00:59:16.922462 | orchestrator | Monday 05 January 2026 00:53:15 +0000 (0:00:01.314) 0:01:13.154 ******** 2026-01-05 00:59:16.922466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 00:59:16.922478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 00:59:16.922485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:59:16.922489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:59:16.922617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2026-01-05 00:59:16.922623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:59:16.922627 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.922631 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.922635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 00:59:16.922666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:59:16.922674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:59:16.922678 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.922682 | orchestrator | 2026-01-05 00:59:16.922686 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-01-05 00:59:16.922690 | orchestrator | Monday 05 January 2026 00:53:16 +0000 (0:00:01.104) 0:01:14.259 ******** 2026-01-05 00:59:16.922694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 00:59:16.922698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:59:16.922702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:59:16.922706 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.922711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 00:59:16.922715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:59:16.922731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:59:16.922736 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.922740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 00:59:16.922744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:59:16.922748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:59:16.922752 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.922756 | orchestrator | 2026-01-05 00:59:16.922760 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-01-05 00:59:16.922764 | orchestrator | Monday 05 January 2026 00:53:16 +0000 (0:00:00.842) 0:01:15.101 ******** 2026-01-05 00:59:16.922768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 00:59:16.922772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:59:16.922837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:59:16.922844 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.922858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 00:59:16.922863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:59:16.922867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:59:16.922871 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.922876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 00:59:16.922881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:59:16.922888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:59:16.922892 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.922896 | orchestrator | 2026-01-05 00:59:16.922900 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-01-05 00:59:16.922904 | orchestrator | Monday 05 January 2026 00:53:17 +0000 (0:00:00.926) 0:01:16.028 ******** 2026-01-05 00:59:16.922908 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-05 00:59:16.922912 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-05 00:59:16.922926 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-05 00:59:16.922930 | orchestrator | 2026-01-05 00:59:16.922934 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-01-05 00:59:16.922938 | orchestrator | Monday 05 January 
2026 00:53:20 +0000 (0:00:02.295) 0:01:18.323 ******** 2026-01-05 00:59:16.922943 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-05 00:59:16.922947 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-05 00:59:16.922952 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-05 00:59:16.922956 | orchestrator | 2026-01-05 00:59:16.922960 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-01-05 00:59:16.922964 | orchestrator | Monday 05 January 2026 00:53:21 +0000 (0:00:01.542) 0:01:19.866 ******** 2026-01-05 00:59:16.922968 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-05 00:59:16.922972 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-05 00:59:16.922977 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-05 00:59:16.922981 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.922985 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-05 00:59:16.922989 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.922993 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-05 00:59:16.922998 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-05 00:59:16.923002 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.923006 | orchestrator | 2026-01-05 00:59:16.923010 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] 
**************************** 2026-01-05 00:59:16.923014 | orchestrator | Monday 05 January 2026 00:53:22 +0000 (0:00:01.170) 0:01:21.036 ******** 2026-01-05 00:59:16.923019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-05 00:59:16.923047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-05 00:59:16.923054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-05 00:59:16.923075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:59:16.923083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:59:16.923089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:59:16.923095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:59:16.923106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:59:16.923113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:59:16.923117 | orchestrator | 2026-01-05 00:59:16.923121 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-01-05 00:59:16.923125 | orchestrator | Monday 05 
January 2026 00:53:25 +0000 (0:00:03.081) 0:01:24.118 ******** 2026-01-05 00:59:16.923129 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:59:16.923133 | orchestrator | 2026-01-05 00:59:16.923136 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-01-05 00:59:16.923140 | orchestrator | Monday 05 January 2026 00:53:26 +0000 (0:00:00.690) 0:01:24.808 ******** 2026-01-05 00:59:16.923145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-05 00:59:16.923161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-05 00:59:16.923285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 00:59:16.923294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 00:59:16.923380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.923390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.923397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.923421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.923429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 
'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-05 00:59:16.923436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 00:59:16.923447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': 
'30'}}})  2026-01-05 00:59:16.923457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.923464 | orchestrator | 2026-01-05 00:59:16.923470 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-01-05 00:59:16.923477 | orchestrator | Monday 05 January 2026 00:53:32 +0000 (0:00:05.361) 0:01:30.170 ******** 2026-01-05 00:59:16.923483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-05 00:59:16.923541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 
'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 00:59:16.923551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.923557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.923569 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.923577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-05 00:59:16.923586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 00:59:16.923593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.923599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 
'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.923606 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.923627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-05 00:59:16.923692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 
00:59:16.923699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.923706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.923715 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.923722 | orchestrator | 2026-01-05 00:59:16.923729 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-01-05 00:59:16.923735 | orchestrator | Monday 05 January 2026 00:53:33 +0000 (0:00:01.514) 0:01:31.684 ******** 2026-01-05 00:59:16.923743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-05 00:59:16.923751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 
'listen_port': '8042'}})  2026-01-05 00:59:16.923758 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.923765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-05 00:59:16.923771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-05 00:59:16.923777 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.923784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-05 00:59:16.923790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-05 00:59:16.923796 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.923803 | orchestrator | 2026-01-05 00:59:16.923827 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-01-05 00:59:16.923836 | orchestrator | Monday 05 January 2026 00:53:34 +0000 (0:00:01.145) 0:01:32.829 ******** 2026-01-05 00:59:16.923842 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.923850 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.923862 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.923868 | orchestrator | 2026-01-05 00:59:16.924086 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-01-05 00:59:16.924095 | orchestrator | Monday 05 January 2026 00:53:36 +0000 (0:00:01.595) 0:01:34.425 ******** 2026-01-05 00:59:16.924101 | orchestrator | changed: 
[testbed-node-0] 2026-01-05 00:59:16.924106 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.924113 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.924118 | orchestrator | 2026-01-05 00:59:16.924125 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-01-05 00:59:16.924131 | orchestrator | Monday 05 January 2026 00:53:38 +0000 (0:00:02.545) 0:01:36.971 ******** 2026-01-05 00:59:16.924220 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:59:16.924227 | orchestrator | 2026-01-05 00:59:16.924233 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-01-05 00:59:16.924239 | orchestrator | Monday 05 January 2026 00:53:39 +0000 (0:00:00.994) 0:01:37.965 ******** 2026-01-05 00:59:16.924247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 00:59:16.924254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.924265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.924272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 00:59:16.924375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.924386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.924394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 00:59:16.924405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.924412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.924419 | orchestrator | 2026-01-05 00:59:16.924426 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-01-05 00:59:16.924433 | orchestrator | Monday 05 January 2026 00:53:46 +0000 (0:00:06.654) 0:01:44.620 ******** 2026-01-05 
00:59:16.924476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-05 00:59:16.924491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.924498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.924504 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.924511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-05 00:59:16.924523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': 
'30'}}})  2026-01-05 00:59:16.924530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.924828 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.924913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-05 00:59:16.924922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.924927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.924931 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.924935 | orchestrator | 2026-01-05 00:59:16.924939 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-01-05 00:59:16.924943 | orchestrator | Monday 05 January 2026 00:53:47 +0000 (0:00:00.822) 0:01:45.443 ******** 2026-01-05 00:59:16.924948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-05 00:59:16.924953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-05 00:59:16.924957 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.925002 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-05 00:59:16.925009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-05 00:59:16.925021 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.925059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-05 00:59:16.925068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-05 00:59:16.925074 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.925080 | orchestrator | 2026-01-05 00:59:16.925087 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-01-05 00:59:16.925093 | orchestrator | Monday 05 January 2026 00:53:48 +0000 (0:00:01.358) 0:01:46.801 ******** 2026-01-05 00:59:16.925099 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.925106 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.925112 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.925119 | orchestrator | 2026-01-05 00:59:16.925125 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-01-05 00:59:16.925131 | orchestrator | Monday 05 January 2026 00:53:50 +0000 (0:00:01.581) 0:01:48.382 ******** 2026-01-05 00:59:16.925138 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.925142 | orchestrator | changed: 
[testbed-node-1] 2026-01-05 00:59:16.925146 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.925150 | orchestrator | 2026-01-05 00:59:16.925197 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-01-05 00:59:16.925207 | orchestrator | Monday 05 January 2026 00:53:52 +0000 (0:00:02.274) 0:01:50.657 ******** 2026-01-05 00:59:16.925213 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.925220 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.925226 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.925232 | orchestrator | 2026-01-05 00:59:16.925238 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-01-05 00:59:16.925245 | orchestrator | Monday 05 January 2026 00:53:52 +0000 (0:00:00.375) 0:01:51.032 ******** 2026-01-05 00:59:16.925289 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:59:16.925296 | orchestrator | 2026-01-05 00:59:16.925303 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-01-05 00:59:16.925309 | orchestrator | Monday 05 January 2026 00:53:53 +0000 (0:00:01.083) 0:01:52.116 ******** 2026-01-05 00:59:16.925316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-05 00:59:16.925324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-05 00:59:16.925342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-05 00:59:16.925790 | orchestrator | 2026-01-05 00:59:16.925798 | orchestrator | TASK [haproxy-config : Add 
configuration for ceph-rgw when using single external frontend] *** 2026-01-05 00:59:16.925803 | orchestrator | Monday 05 January 2026 00:53:57 +0000 (0:00:03.696) 0:01:55.813 ******** 2026-01-05 00:59:16.925882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-05 00:59:16.925895 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.925899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  
2026-01-05 00:59:16.925903 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.925907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-05 00:59:16.925918 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.925922 | orchestrator | 2026-01-05 00:59:16.925926 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-01-05 00:59:16.925930 | orchestrator | Monday 05 January 2026 00:54:00 +0000 (0:00:02.406) 0:01:58.219 ******** 2026-01-05 00:59:16.925934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-05 00:59:16.925942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 
192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-05 00:59:16.925947 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.925952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-05 00:59:16.925956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-05 00:59:16.926170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-05 00:59:16.926266 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.926279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 
2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-05 00:59:16.926287 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.926293 | orchestrator | 2026-01-05 00:59:16.926298 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-01-05 00:59:16.926304 | orchestrator | Monday 05 January 2026 00:54:02 +0000 (0:00:02.751) 0:02:00.971 ******** 2026-01-05 00:59:16.926309 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.926315 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.926321 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.926326 | orchestrator | 2026-01-05 00:59:16.926332 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-01-05 00:59:16.926338 | orchestrator | Monday 05 January 2026 00:54:03 +0000 (0:00:01.144) 0:02:02.115 ******** 2026-01-05 00:59:16.926345 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.926350 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.926362 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.926368 | orchestrator | 2026-01-05 00:59:16.926374 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-01-05 00:59:16.926381 | orchestrator | Monday 05 January 2026 00:54:06 +0000 (0:00:02.197) 0:02:04.313 ******** 2026-01-05 00:59:16.926387 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:59:16.926392 | orchestrator | 2026-01-05 00:59:16.926398 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-01-05 00:59:16.926405 | orchestrator | Monday 05 January 2026 00:54:06 +0000 (0:00:00.713) 0:02:05.026 ******** 2026-01-05 00:59:16.926412 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-05 00:59:16.926425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.926432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.926542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.926556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-05 00:59:16.926569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.926577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.926588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-05 
00:59:16.926638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-05 00:59:16.926648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.926661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.926668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.926674 | orchestrator | 2026-01-05 00:59:16.926681 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-01-05 00:59:16.926688 | orchestrator | Monday 05 January 2026 00:54:13 +0000 (0:00:06.763) 0:02:11.790 ******** 2026-01-05 00:59:16.926702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-05 00:59:16.926709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.926745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.926753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.926945 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.926962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-05 00:59:16.926974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-05 00:59:16.926981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.926988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.927085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.927105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.927112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.927118 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.927212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.927224 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.927231 | orchestrator | 2026-01-05 00:59:16.927237 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-01-05 00:59:16.927244 | orchestrator | Monday 05 January 2026 00:54:15 +0000 (0:00:01.827) 0:02:13.617 ******** 2026-01-05 00:59:16.927251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-05 00:59:16.927258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-05 00:59:16.927302 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.927312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-05 00:59:16.927319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-05 00:59:16.927330 
| orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.927337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-05 00:59:16.927414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-05 00:59:16.927423 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.927429 | orchestrator | 2026-01-05 00:59:16.927436 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-01-05 00:59:16.927442 | orchestrator | Monday 05 January 2026 00:54:16 +0000 (0:00:01.261) 0:02:14.879 ******** 2026-01-05 00:59:16.927449 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.927455 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.927461 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.927468 | orchestrator | 2026-01-05 00:59:16.927473 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-01-05 00:59:16.927477 | orchestrator | Monday 05 January 2026 00:54:18 +0000 (0:00:01.340) 0:02:16.219 ******** 2026-01-05 00:59:16.927481 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.927484 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.927488 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.927492 | orchestrator | 2026-01-05 00:59:16.927496 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-01-05 00:59:16.927499 | orchestrator | Monday 05 January 2026 00:54:20 +0000 (0:00:02.109) 0:02:18.329 ******** 2026-01-05 00:59:16.927503 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.927507 | 
orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.927511 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.927514 | orchestrator | 2026-01-05 00:59:16.927518 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-01-05 00:59:16.927522 | orchestrator | Monday 05 January 2026 00:54:20 +0000 (0:00:00.640) 0:02:18.969 ******** 2026-01-05 00:59:16.927526 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.927553 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.927557 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.927561 | orchestrator | 2026-01-05 00:59:16.927564 | orchestrator | TASK [include_role : designate] ************************************************ 2026-01-05 00:59:16.927568 | orchestrator | Monday 05 January 2026 00:54:21 +0000 (0:00:00.361) 0:02:19.331 ******** 2026-01-05 00:59:16.927572 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:59:16.927576 | orchestrator | 2026-01-05 00:59:16.927580 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-01-05 00:59:16.927583 | orchestrator | Monday 05 January 2026 00:54:22 +0000 (0:00:01.037) 0:02:20.368 ******** 2026-01-05 00:59:16.927591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 00:59:16.927596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 00:59:16.927633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 00:59:16.927639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.927643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 00:59:16.927648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.927655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.927661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.927672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.927716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 00:59:16.927726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.927743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.927751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.927760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 00:59:16.927772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.927779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.927826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.927836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.927842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.927848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.927861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.927867 | orchestrator | 2026-01-05 00:59:16.927873 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-01-05 00:59:16.927880 | orchestrator | Monday 05 January 2026 00:54:28 +0000 (0:00:05.909) 0:02:26.278 ******** 2026-01-05 00:59:16.927911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-05 00:59:16.927960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-05 00:59:16.927971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 00:59:16.927978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 00:59:16.927989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.928005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-05 00:59:16.928013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.928073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.928085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 00:59:16.928092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.928098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.928115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.928122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.928157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.928167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.928173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.928180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.928792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.928799 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.928806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.928810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.928814 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.929293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.929313 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.929319 | orchestrator | 2026-01-05 00:59:16.929325 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-01-05 00:59:16.929333 | orchestrator | Monday 05 January 2026 00:54:28 +0000 (0:00:00.679) 0:02:26.958 ******** 2026-01-05 00:59:16.929340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-01-05 00:59:16.929348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-01-05 00:59:16.929355 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.929362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-01-05 00:59:16.929370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-01-05 00:59:16.929382 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.929389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-01-05 00:59:16.929395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9001', 'listen_port': '9001'}})  2026-01-05 00:59:16.929401 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.929408 | orchestrator | 2026-01-05 00:59:16.929414 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-01-05 00:59:16.929420 | orchestrator | Monday 05 January 2026 00:54:30 +0000 (0:00:01.229) 0:02:28.187 ******** 2026-01-05 00:59:16.929427 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.929477 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.929484 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.929491 | orchestrator | 2026-01-05 00:59:16.929497 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-01-05 00:59:16.929503 | orchestrator | Monday 05 January 2026 00:54:32 +0000 (0:00:02.181) 0:02:30.368 ******** 2026-01-05 00:59:16.929509 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.929515 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.929522 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.929528 | orchestrator | 2026-01-05 00:59:16.929535 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-01-05 00:59:16.929541 | orchestrator | Monday 05 January 2026 00:54:34 +0000 (0:00:01.866) 0:02:32.235 ******** 2026-01-05 00:59:16.929547 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.930185 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.930198 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.930207 | orchestrator | 2026-01-05 00:59:16.930226 | orchestrator | TASK [include_role : glance] *************************************************** 2026-01-05 00:59:16.930234 | orchestrator | Monday 05 January 2026 00:54:34 +0000 (0:00:00.600) 0:02:32.835 ******** 2026-01-05 00:59:16.930241 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 
2026-01-05 00:59:16.930248 | orchestrator | 2026-01-05 00:59:16.930255 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-01-05 00:59:16.930262 | orchestrator | Monday 05 January 2026 00:54:35 +0000 (0:00:00.921) 0:02:33.757 ******** 2026-01-05 00:59:16.930283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 
2026-01-05 00:59:16.930305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-05 00:59:16.930318 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 00:59:16.930334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-05 00:59:16.930351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 00:59:16.930365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': 
{'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-05 00:59:16.930377 | orchestrator | 2026-01-05 00:59:16.930385 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-01-05 00:59:16.930393 | orchestrator | Monday 05 January 2026 00:54:40 +0000 (0:00:04.589) 0:02:38.347 ******** 2026-01-05 00:59:16.930404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-05 00:59:16.930417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': 
{'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-05 00:59:16.930429 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.930440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-05 00:59:16.930462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-05 00:59:16.930475 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.930484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-05 00:59:16.930595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 
'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-05 00:59:16.930614 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.930622 | orchestrator | 2026-01-05 00:59:16.930629 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-01-05 00:59:16.930636 | orchestrator | Monday 05 January 2026 00:54:43 +0000 (0:00:03.712) 0:02:42.059 ******** 2026-01-05 00:59:16.930644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-05 00:59:16.930652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-05 00:59:16.930659 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.930666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-05 00:59:16.930674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-05 00:59:16.930681 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.930691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-05 00:59:16.930699 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-05 00:59:16.930707 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.930714 | orchestrator | 2026-01-05 00:59:16.930721 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-01-05 00:59:16.930732 | orchestrator | Monday 05 January 2026 00:54:48 +0000 (0:00:04.405) 0:02:46.464 ******** 2026-01-05 00:59:16.930739 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.930747 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.930754 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.930760 | orchestrator | 2026-01-05 00:59:16.930766 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-01-05 00:59:16.930773 | orchestrator | Monday 05 January 2026 00:54:49 +0000 (0:00:01.334) 0:02:47.798 ******** 2026-01-05 00:59:16.930780 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.930786 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.930793 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.930800 | orchestrator | 2026-01-05 00:59:16.930848 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-01-05 00:59:16.930917 | orchestrator | Monday 05 January 2026 00:54:52 +0000 (0:00:02.344) 0:02:50.143 ******** 2026-01-05 00:59:16.930927 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.930934 | orchestrator | skipping: [testbed-node-1] 
2026-01-05 00:59:16.930940 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.930947 | orchestrator | 2026-01-05 00:59:16.930953 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-01-05 00:59:16.930960 | orchestrator | Monday 05 January 2026 00:54:52 +0000 (0:00:00.624) 0:02:50.768 ******** 2026-01-05 00:59:16.930968 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:59:16.930974 | orchestrator | 2026-01-05 00:59:16.930982 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-01-05 00:59:16.930989 | orchestrator | Monday 05 January 2026 00:54:53 +0000 (0:00:00.935) 0:02:51.703 ******** 2026-01-05 00:59:16.930997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 00:59:16.931006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 00:59:16.931022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 00:59:16.931043 | orchestrator | 2026-01-05 00:59:16.931049 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-01-05 00:59:16.931056 | orchestrator | Monday 05 January 2026 00:54:57 +0000 (0:00:03.960) 0:02:55.664 ******** 2026-01-05 00:59:16.931069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-05 00:59:16.931127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 
'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-05 00:59:16.931137 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.931144 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.931151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-05 00:59:16.931159 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.931165 | orchestrator | 2026-01-05 00:59:16.931172 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-01-05 00:59:16.931179 | orchestrator | Monday 05 January 2026 00:54:58 +0000 (0:00:00.883) 0:02:56.547 ******** 2026-01-05 00:59:16.931187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-01-05 
00:59:16.931194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-01-05 00:59:16.931202 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.931209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-01-05 00:59:16.931216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-01-05 00:59:16.931223 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.931243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-01-05 00:59:16.931251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-01-05 00:59:16.931271 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.931279 | orchestrator | 2026-01-05 00:59:16.931286 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-01-05 00:59:16.931292 | orchestrator | Monday 05 January 2026 00:54:59 +0000 (0:00:00.901) 0:02:57.449 ******** 2026-01-05 00:59:16.931300 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.931306 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.931343 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.931352 | orchestrator | 2026-01-05 00:59:16.931365 | orchestrator | TASK 
[proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-01-05 00:59:16.931372 | orchestrator | Monday 05 January 2026 00:55:00 +0000 (0:00:01.445) 0:02:58.895 ******** 2026-01-05 00:59:16.931379 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.931387 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.931394 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.931401 | orchestrator | 2026-01-05 00:59:16.931408 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-01-05 00:59:16.931415 | orchestrator | Monday 05 January 2026 00:55:03 +0000 (0:00:02.602) 0:03:01.497 ******** 2026-01-05 00:59:16.931422 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.931429 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.931436 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.931443 | orchestrator | 2026-01-05 00:59:16.931450 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-01-05 00:59:16.931456 | orchestrator | Monday 05 January 2026 00:55:03 +0000 (0:00:00.637) 0:03:02.134 ******** 2026-01-05 00:59:16.931464 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:59:16.931471 | orchestrator | 2026-01-05 00:59:16.931478 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-01-05 00:59:16.931485 | orchestrator | Monday 05 January 2026 00:55:05 +0000 (0:00:01.108) 0:03:03.243 ******** 2026-01-05 00:59:16.931554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 
'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 00:59:16.931576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': 
[]}}}}) 2026-01-05 00:59:16.931629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 00:59:16.931645 | orchestrator | 2026-01-05 00:59:16.931653 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-01-05 00:59:16.931660 | orchestrator | Monday 05 January 2026 00:55:09 +0000 (0:00:04.299) 0:03:07.543 ******** 2026-01-05 00:59:16.931710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 00:59:16.931719 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.931726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 00:59:16.931736 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.931784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 00:59:16.931793 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.931799 | orchestrator | 2026-01-05 00:59:16.931805 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-01-05 00:59:16.931810 | orchestrator | Monday 05 January 2026 00:55:10 +0000 (0:00:01.330) 0:03:08.874 ******** 2026-01-05 00:59:16.931817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-05 00:59:16.931825 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-05 00:59:16.931839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-05 00:59:16.931846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-05 00:59:16.931853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-05 00:59:16.931860 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.931867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-05 00:59:16.931877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-05 00:59:16.931885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-05 00:59:16.931892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-05 00:59:16.931898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-05 00:59:16.931950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-05 00:59:16.931958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-05 00:59:16.931965 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-05 00:59:16.931976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-05 00:59:16.931983 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.931989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-05 00:59:16.931995 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.932001 | orchestrator | 2026-01-05 00:59:16.932007 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-01-05 00:59:16.932014 | orchestrator | Monday 05 January 2026 00:55:11 +0000 (0:00:01.006) 0:03:09.880 ******** 2026-01-05 00:59:16.932020 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.932063 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.932070 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.932075 | orchestrator | 2026-01-05 00:59:16.932081 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-01-05 00:59:16.932086 | orchestrator | Monday 05 January 2026 00:55:13 +0000 (0:00:01.602) 0:03:11.483 ******** 2026-01-05 00:59:16.932091 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.932097 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.932102 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.932107 | orchestrator | 2026-01-05 00:59:16.932113 | orchestrator | TASK [include_role : influxdb] 
************************************************* 2026-01-05 00:59:16.932119 | orchestrator | Monday 05 January 2026 00:55:15 +0000 (0:00:02.241) 0:03:13.725 ******** 2026-01-05 00:59:16.932125 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.932131 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.932136 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.932141 | orchestrator | 2026-01-05 00:59:16.932147 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-01-05 00:59:16.932152 | orchestrator | Monday 05 January 2026 00:55:16 +0000 (0:00:00.454) 0:03:14.179 ******** 2026-01-05 00:59:16.932158 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.932163 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.932168 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.932174 | orchestrator | 2026-01-05 00:59:16.932179 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-01-05 00:59:16.932184 | orchestrator | Monday 05 January 2026 00:55:16 +0000 (0:00:00.614) 0:03:14.794 ******** 2026-01-05 00:59:16.932190 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:59:16.932195 | orchestrator | 2026-01-05 00:59:16.932200 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-01-05 00:59:16.932206 | orchestrator | Monday 05 January 2026 00:55:17 +0000 (0:00:01.027) 0:03:15.822 ******** 2026-01-05 00:59:16.932217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 00:59:16.932284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 00:59:16.932299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 00:59:16.932306 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 00:59:16.932312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 00:59:16.932324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 00:59:16.932331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 00:59:16.932416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 00:59:16.932426 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 00:59:16.932432 | orchestrator | 2026-01-05 00:59:16.932438 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-01-05 00:59:16.932443 | orchestrator | Monday 05 January 2026 00:55:21 +0000 (0:00:03.883) 0:03:19.705 ******** 2026-01-05 00:59:16.932450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 
00:59:16.932460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 00:59:16.932466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 00:59:16.932479 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.932523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 00:59:16.932533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 00:59:16.932540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 00:59:16.932546 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.932557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 
'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 00:59:16.932564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 00:59:16.932575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 00:59:16.932581 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.932588 | orchestrator | 2026-01-05 00:59:16.932594 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-01-05 00:59:16.932641 | orchestrator | Monday 05 January 2026 00:55:22 +0000 (0:00:01.005) 0:03:20.711 ******** 2026-01-05 00:59:16.932653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-05 00:59:16.932662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-05 00:59:16.932669 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.932675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-05 00:59:16.932681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-05 00:59:16.932687 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.932693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-05 00:59:16.932699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-05 00:59:16.932705 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.932711 | orchestrator | 2026-01-05 00:59:16.932717 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-01-05 00:59:16.932725 | orchestrator | Monday 05 January 2026 00:55:23 +0000 (0:00:00.866) 0:03:21.578 ******** 2026-01-05 00:59:16.932731 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.932738 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.932743 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.932749 | orchestrator | 2026-01-05 00:59:16.932755 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-01-05 00:59:16.932761 | orchestrator | Monday 05 January 2026 00:55:24 +0000 (0:00:01.423) 0:03:23.002 ******** 2026-01-05 00:59:16.932767 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.932772 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.932777 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.932787 | orchestrator | 2026-01-05 00:59:16.932793 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-01-05 00:59:16.932800 | orchestrator | Monday 05 January 2026 00:55:27 +0000 (0:00:02.213) 0:03:25.215 ******** 2026-01-05 00:59:16.932805 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.932811 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.932818 | orchestrator | skipping: [testbed-node-2] 
2026-01-05 00:59:16.932824 | orchestrator | 2026-01-05 00:59:16.932830 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-01-05 00:59:16.932836 | orchestrator | Monday 05 January 2026 00:55:27 +0000 (0:00:00.610) 0:03:25.826 ******** 2026-01-05 00:59:16.932842 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:59:16.932848 | orchestrator | 2026-01-05 00:59:16.932855 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-01-05 00:59:16.932861 | orchestrator | Monday 05 January 2026 00:55:28 +0000 (0:00:01.131) 0:03:26.958 ******** 2026-01-05 00:59:16.932868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 00:59:16.932923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.932953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 00:59:16.932962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.932977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 00:59:16.932984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.932991 | orchestrator | 2026-01-05 00:59:16.932998 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-01-05 00:59:16.933005 
| orchestrator | Monday 05 January 2026 00:55:32 +0000 (0:00:04.176) 0:03:31.134 ******** 2026-01-05 00:59:16.933098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-05 00:59:16.933112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.933119 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.933132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-05 00:59:16.933142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.933149 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.933198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-05 00:59:16.933208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.933216 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.933223 | orchestrator | 2026-01-05 00:59:16.933230 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-01-05 00:59:16.933237 | orchestrator | Monday 05 January 2026 00:55:34 +0000 (0:00:01.060) 0:03:32.195 ******** 2026-01-05 00:59:16.933244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-05 00:59:16.933284 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-05 00:59:16.933316 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.933323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-05 00:59:16.933330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-05 00:59:16.933337 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.933345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-05 00:59:16.933352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-05 00:59:16.933359 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.933365 | orchestrator | 2026-01-05 00:59:16.933372 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-01-05 00:59:16.933378 | orchestrator | Monday 05 January 2026 00:55:34 +0000 (0:00:00.905) 0:03:33.100 ******** 2026-01-05 00:59:16.933385 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.933392 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.933401 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.933408 | orchestrator | 2026-01-05 00:59:16.933414 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules 
config] ************* 2026-01-05 00:59:16.933421 | orchestrator | Monday 05 January 2026 00:55:36 +0000 (0:00:01.597) 0:03:34.698 ******** 2026-01-05 00:59:16.933428 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.933434 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.933441 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.933447 | orchestrator | 2026-01-05 00:59:16.933453 | orchestrator | TASK [include_role : manila] *************************************************** 2026-01-05 00:59:16.933459 | orchestrator | Monday 05 January 2026 00:55:38 +0000 (0:00:02.183) 0:03:36.882 ******** 2026-01-05 00:59:16.933465 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:59:16.933471 | orchestrator | 2026-01-05 00:59:16.933477 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-01-05 00:59:16.933484 | orchestrator | Monday 05 January 2026 00:55:40 +0000 (0:00:01.350) 0:03:38.232 ******** 2026-01-05 00:59:16.933490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-05 00:59:16.933556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.933574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.933581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.933595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 
'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-05 00:59:16.933602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.933635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 
5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.933692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.933709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-05 00:59:16.933716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.933723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.933734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.933741 | orchestrator | 2026-01-05 00:59:16.933748 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-01-05 00:59:16.933755 | orchestrator | Monday 05 January 2026 00:55:43 +0000 (0:00:03.867) 0:03:42.099 ******** 2026-01-05 00:59:16.933805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-05 00:59:16.933821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.933828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  
2026-01-05 00:59:16.933835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.933841 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.933851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-05 00:59:16.933859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.933865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.933917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.933926 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.933933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-05 00:59:16.933939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.933948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.933955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 
'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.933961 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.933967 | orchestrator | 2026-01-05 00:59:16.933973 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-01-05 00:59:16.933979 | orchestrator | Monday 05 January 2026 00:55:44 +0000 (0:00:00.718) 0:03:42.818 ******** 2026-01-05 00:59:16.933985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-01-05 00:59:16.933991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-01-05 00:59:16.934005 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.934011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-01-05 00:59:16.934103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-01-05 00:59:16.934112 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.934118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-01-05 00:59:16.934124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-01-05 00:59:16.934131 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.934137 | orchestrator | 2026-01-05 00:59:16.934144 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-01-05 00:59:16.934150 | orchestrator | Monday 05 January 2026 00:55:45 +0000 (0:00:01.298) 0:03:44.116 ******** 2026-01-05 00:59:16.934157 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.934164 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.934171 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.934177 | orchestrator | 2026-01-05 00:59:16.934184 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-01-05 00:59:16.934190 | orchestrator | Monday 05 January 2026 00:55:47 +0000 (0:00:01.398) 0:03:45.514 ******** 2026-01-05 00:59:16.934197 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.934203 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.934209 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.934215 | orchestrator | 2026-01-05 00:59:16.934221 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-01-05 00:59:16.934228 | orchestrator | Monday 05 January 2026 00:55:49 +0000 (0:00:02.240) 0:03:47.755 ******** 2026-01-05 00:59:16.934234 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:59:16.934241 | orchestrator | 2026-01-05 00:59:16.934247 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] 
******************************* 2026-01-05 00:59:16.934254 | orchestrator | Monday 05 January 2026 00:55:51 +0000 (0:00:01.448) 0:03:49.203 ******** 2026-01-05 00:59:16.934261 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-05 00:59:16.934267 | orchestrator | 2026-01-05 00:59:16.934274 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-01-05 00:59:16.934281 | orchestrator | Monday 05 January 2026 00:55:54 +0000 (0:00:03.130) 0:03:52.333 ******** 2026-01-05 00:59:16.934293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:59:16.934355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check 
port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:59:16.934366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-05 00:59:16.934373 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.934383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-05 00:59:16.934394 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.934431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:59:16.934453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-05 00:59:16.934461 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.934467 | orchestrator | 2026-01-05 00:59:16.934473 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-01-05 00:59:16.934480 | orchestrator | Monday 05 January 2026 00:55:56 +0000 (0:00:02.414) 0:03:54.747 ******** 2026-01-05 00:59:16.934490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:59:16.934501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-05 00:59:16.934508 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.934560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:59:16.934571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-05 00:59:16.934578 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.934588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:59:16.934643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-05 00:59:16.934653 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.934660 | orchestrator | 2026-01-05 00:59:16.934667 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-01-05 00:59:16.934674 | orchestrator | Monday 05 January 2026 00:55:59 +0000 (0:00:02.673) 0:03:57.420 ******** 2026-01-05 00:59:16.934681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-05 00:59:16.934688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-05 00:59:16.934695 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.934702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': 
'3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-05 00:59:16.934718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-05 00:59:16.934725 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.934732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-05 00:59:16.934785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-05 00:59:16.934795 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.934802 | orchestrator | 2026-01-05 00:59:16.934809 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-01-05 00:59:16.934816 | orchestrator | Monday 05 January 2026 00:56:02 +0000 (0:00:03.007) 0:04:00.428 ******** 2026-01-05 00:59:16.934832 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.934839 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.934845 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.934851 | orchestrator | 2026-01-05 00:59:16.934858 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-01-05 00:59:16.934865 | orchestrator | Monday 05 January 2026 00:56:04 +0000 (0:00:01.941) 0:04:02.370 ******** 2026-01-05 00:59:16.934871 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.934877 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.934884 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.934890 | orchestrator | 2026-01-05 00:59:16.934928 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-01-05 00:59:16.934937 | orchestrator | Monday 05 January 2026 00:56:05 +0000 (0:00:01.567) 0:04:03.938 ******** 2026-01-05 00:59:16.934943 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.934950 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.934957 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.934963 
| orchestrator | 2026-01-05 00:59:16.934970 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-01-05 00:59:16.934977 | orchestrator | Monday 05 January 2026 00:56:06 +0000 (0:00:00.268) 0:04:04.206 ******** 2026-01-05 00:59:16.934984 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:59:16.934997 | orchestrator | 2026-01-05 00:59:16.935004 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-01-05 00:59:16.935011 | orchestrator | Monday 05 January 2026 00:56:07 +0000 (0:00:01.204) 0:04:05.410 ******** 2026-01-05 00:59:16.935018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-05 00:59:16.935042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': 
{'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-05 00:59:16.935050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-05 00:59:16.935057 | orchestrator | 2026-01-05 00:59:16.935063 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-01-05 00:59:16.935070 | orchestrator | Monday 05 January 2026 00:56:08 +0000 (0:00:01.494) 0:04:06.904 ******** 2026-01-05 00:59:16.935134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-05 00:59:16.935144 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.935151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-05 00:59:16.935165 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.935172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-05 00:59:16.935178 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.935185 | orchestrator | 2026-01-05 00:59:16.935192 | orchestrator | TASK [haproxy-config : Configuring 
firewall for memcached] ********************* 2026-01-05 00:59:16.935199 | orchestrator | Monday 05 January 2026 00:56:09 +0000 (0:00:00.446) 0:04:07.350 ******** 2026-01-05 00:59:16.935250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-05 00:59:16.935265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-05 00:59:16.935273 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.935297 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.935304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-05 00:59:16.935311 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.935317 | orchestrator | 2026-01-05 00:59:16.935323 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-01-05 00:59:16.935330 | orchestrator | Monday 05 January 2026 00:56:10 +0000 (0:00:00.969) 0:04:08.320 ******** 2026-01-05 00:59:16.935336 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.935343 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.935349 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.935356 | orchestrator | 2026-01-05 00:59:16.935362 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] 
********** 2026-01-05 00:59:16.935369 | orchestrator | Monday 05 January 2026 00:56:10 +0000 (0:00:00.509) 0:04:08.829 ******** 2026-01-05 00:59:16.935376 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.935382 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.935389 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.935396 | orchestrator | 2026-01-05 00:59:16.935402 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-01-05 00:59:16.935409 | orchestrator | Monday 05 January 2026 00:56:12 +0000 (0:00:01.434) 0:04:10.264 ******** 2026-01-05 00:59:16.935415 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.935421 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.935428 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.935441 | orchestrator | 2026-01-05 00:59:16.935447 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-01-05 00:59:16.935513 | orchestrator | Monday 05 January 2026 00:56:12 +0000 (0:00:00.333) 0:04:10.597 ******** 2026-01-05 00:59:16.935523 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:59:16.935530 | orchestrator | 2026-01-05 00:59:16.935536 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-01-05 00:59:16.935542 | orchestrator | Monday 05 January 2026 00:56:14 +0000 (0:00:01.561) 0:04:12.159 ******** 2026-01-05 00:59:16.935577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 00:59:16.935585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.935597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.935606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.935663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-05 00:59:16.935680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.935688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:59:16.935696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:59:16.935703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': 
'30'}}})  2026-01-05 00:59:16.935714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 00:59:16.935721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.935779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-05 00:59:16.935790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:59:16.935797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.935804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 00:59:16.935815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-05 00:59:16.935824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-05 00:59:16.935879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.935889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.935896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.935907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-05 00:59:16.935915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.935930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 
'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:59:16.935980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:59:16.935989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.935997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 00:59:16.936003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.936014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-05 00:59:16.936021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:59:16.936051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.936102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 00:59:16.936112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 
'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-05 00:59:16.936122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.936129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-01-05 00:59:16.936141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:59:16.936210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:59:16.936219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-01-05 00:59:16.936227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:59:16.936234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-01-05 00:59:16.936244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-01-05 00:59:16.936256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:59:16.936308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 00:59:16.936317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-01-05 00:59:16.936324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-01-05 00:59:16.936331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-01-05 00:59:16.936342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:59:16.936355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-01-05 00:59:16.936399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-01-05 00:59:16.936407 | orchestrator |
2026-01-05 00:59:16.936413 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2026-01-05 00:59:16.936419 | orchestrator | Monday 05 January 2026 00:56:18 +0000 (0:00:04.596) 0:04:16.755 ********
2026-01-05 00:59:16.936425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 00:59:16.936432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:59:16.936442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:59:16.936453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:59:16.936497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-01-05 00:59:16.936506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:59:16.936513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-01-05 00:59:16.936520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-01-05 00:59:16.936527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:59:16.936542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 00:59:16.936549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-01-05 00:59:16.936597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-01-05 00:59:16.936606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 00:59:16.936612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-01-05 00:59:16.936619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:59:16.936634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 00:59:16.936641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:59:16.936685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:59:16.936695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:59:16.936702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-01-05 00:59:16.936716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:59:16.936724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:59:16.936821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-01-05 00:59:16.936840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-01-05 00:59:16.936847 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:59:16.936854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:59:16.936866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:59:16.936875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-01-05 00:59:16.936882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-01-05 00:59:16.936930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:59:16.936939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-01-05 00:59:16.936945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-01-05 00:59:16.936952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:59:16.936962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-01-05 00:59:16.936972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 00:59:16.936979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:59:16.937023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-01-05 00:59:16.937048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 00:59:16.937055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-01-05 00:59:16.937066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.937076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:59:16.937083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-05 00:59:16.937089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.937114 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:59:16.937121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-05 00:59:16.937132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.937141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-05 00:59:16.937147 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.937154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-05 00:59:16.937177 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-05 00:59:16.937184 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.937191 | orchestrator | 2026-01-05 00:59:16.937197 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-01-05 00:59:16.937204 | orchestrator | Monday 05 January 2026 00:56:19 +0000 (0:00:01.372) 0:04:18.127 ******** 2026-01-05 00:59:16.937210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-05 00:59:16.937217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-05 00:59:16.937229 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.937235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-05 00:59:16.937241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-05 
00:59:16.937247 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.937253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-05 00:59:16.937259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-05 00:59:16.937264 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.937273 | orchestrator | 2026-01-05 00:59:16.937278 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-01-05 00:59:16.937284 | orchestrator | Monday 05 January 2026 00:56:21 +0000 (0:00:01.953) 0:04:20.081 ******** 2026-01-05 00:59:16.937290 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.937296 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.937302 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.937308 | orchestrator | 2026-01-05 00:59:16.937315 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-01-05 00:59:16.937321 | orchestrator | Monday 05 January 2026 00:56:23 +0000 (0:00:01.428) 0:04:21.510 ******** 2026-01-05 00:59:16.937327 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.937333 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.937340 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.937346 | orchestrator | 2026-01-05 00:59:16.937352 | orchestrator | TASK [include_role : placement] ************************************************ 2026-01-05 00:59:16.937358 | orchestrator | Monday 05 January 2026 00:56:25 +0000 (0:00:02.211) 0:04:23.721 ******** 2026-01-05 00:59:16.937364 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 
2026-01-05 00:59:16.937370 | orchestrator | 2026-01-05 00:59:16.937376 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-01-05 00:59:16.937386 | orchestrator | Monday 05 January 2026 00:56:26 +0000 (0:00:01.271) 0:04:24.993 ******** 2026-01-05 00:59:16.937393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 00:59:16.937423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 00:59:16.937439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 00:59:16.937448 | orchestrator | 2026-01-05 00:59:16.937456 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-01-05 00:59:16.937464 | orchestrator | Monday 05 January 2026 00:56:30 +0000 (0:00:04.005) 0:04:28.999 ******** 2026-01-05 00:59:16.937470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 
'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-05 00:59:16.937476 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.937485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-05 00:59:16.937491 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.937512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 
'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-05 00:59:16.937523 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.937529 | orchestrator | 2026-01-05 00:59:16.937534 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-01-05 00:59:16.937540 | orchestrator | Monday 05 January 2026 00:56:31 +0000 (0:00:00.582) 0:04:29.582 ******** 2026-01-05 00:59:16.937545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-05 00:59:16.937551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-05 00:59:16.937558 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.937564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-05 00:59:16.937569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-05 00:59:16.937575 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.937581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-05 00:59:16.937587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-05 00:59:16.937593 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.937599 | orchestrator | 2026-01-05 00:59:16.937605 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-01-05 00:59:16.937611 | orchestrator | Monday 05 January 2026 00:56:32 +0000 (0:00:00.797) 0:04:30.379 ******** 2026-01-05 00:59:16.937617 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.937622 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.937628 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.937633 | orchestrator | 2026-01-05 00:59:16.937639 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-01-05 00:59:16.937645 | orchestrator | Monday 05 January 2026 00:56:34 +0000 (0:00:01.946) 0:04:32.326 ******** 2026-01-05 00:59:16.937651 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.937658 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.937664 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.937671 | orchestrator | 2026-01-05 00:59:16.937678 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-01-05 00:59:16.937685 | orchestrator | Monday 05 January 2026 00:56:36 +0000 (0:00:01.827) 0:04:34.153 ******** 2026-01-05 00:59:16.937692 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:59:16.937699 | orchestrator | 2026-01-05 00:59:16.937712 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] 
*********************** 2026-01-05 00:59:16.937719 | orchestrator | Monday 05 January 2026 00:56:37 +0000 (0:00:01.648) 0:04:35.802 ******** 2026-01-05 00:59:16.937728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 00:59:16.937763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 00:59:16.937772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 00:59:16.937783 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.937795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.937802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.937824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 
'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.937833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.937839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.937847 | orchestrator | 2026-01-05 00:59:16.937853 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-01-05 00:59:16.937860 | orchestrator | Monday 05 January 2026 00:56:42 +0000 (0:00:04.483) 
0:04:40.285 ******** 2026-01-05 00:59:16.937869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-05 00:59:16.937881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.937906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.937915 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.937922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-05 00:59:16.937929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 
'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.937938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.937948 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.937955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-05 00:59:16.937976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.937983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.937989 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.937995 | orchestrator | 2026-01-05 00:59:16.938001 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-01-05 
00:59:16.938007 | orchestrator | Monday 05 January 2026 00:56:43 +0000 (0:00:01.461) 0:04:41.747 ******** 2026-01-05 00:59:16.938071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-05 00:59:16.938084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-05 00:59:16.938091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-05 00:59:16.938103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-05 00:59:16.938111 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.938117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-05 00:59:16.938127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-05 00:59:16.938134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-05 00:59:16.938141 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-05 00:59:16.938147 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.938154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-05 00:59:16.938160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-05 00:59:16.938167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-05 00:59:16.938174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-05 00:59:16.938181 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.938187 | orchestrator | 2026-01-05 00:59:16.938219 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-01-05 00:59:16.938226 | orchestrator | Monday 05 January 2026 00:56:44 +0000 (0:00:01.006) 0:04:42.753 ******** 2026-01-05 00:59:16.938233 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.938239 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.938245 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.938252 | orchestrator | 2026-01-05 00:59:16.938258 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules 
config] *************** 2026-01-05 00:59:16.938264 | orchestrator | Monday 05 January 2026 00:56:45 +0000 (0:00:01.381) 0:04:44.135 ******** 2026-01-05 00:59:16.938270 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.938277 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.938283 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.938289 | orchestrator | 2026-01-05 00:59:16.938295 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-01-05 00:59:16.938302 | orchestrator | Monday 05 January 2026 00:56:48 +0000 (0:00:02.188) 0:04:46.324 ******** 2026-01-05 00:59:16.938308 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:59:16.938314 | orchestrator | 2026-01-05 00:59:16.938320 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-01-05 00:59:16.938327 | orchestrator | Monday 05 January 2026 00:56:49 +0000 (0:00:01.664) 0:04:47.988 ******** 2026-01-05 00:59:16.938333 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-01-05 00:59:16.938346 | orchestrator | 2026-01-05 00:59:16.938352 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-01-05 00:59:16.938358 | orchestrator | Monday 05 January 2026 00:56:50 +0000 (0:00:00.887) 0:04:48.876 ******** 2026-01-05 00:59:16.938365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': 
'6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-05 00:59:16.938372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-05 00:59:16.938383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-05 00:59:16.938389 | orchestrator | 2026-01-05 00:59:16.938395 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-01-05 00:59:16.938402 | orchestrator | Monday 05 January 2026 00:56:55 +0000 (0:00:04.847) 0:04:53.724 ******** 2026-01-05 00:59:16.938408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  
2026-01-05 00:59:16.938415 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.938421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 00:59:16.938427 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.938453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 00:59:16.938460 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.938466 | orchestrator | 2026-01-05 00:59:16.938471 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-01-05 00:59:16.938482 | orchestrator | Monday 05 January 2026 00:56:56 +0000 (0:00:01.236) 0:04:54.961 ******** 2026-01-05 00:59:16.938487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-05 00:59:16.938493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-05 00:59:16.938499 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.938506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-05 00:59:16.938512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-05 00:59:16.938518 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.938524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-05 00:59:16.938531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-05 00:59:16.938537 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.938544 | orchestrator | 2026-01-05 00:59:16.938551 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-05 00:59:16.938558 | orchestrator | Monday 05 January 2026 00:56:58 +0000 (0:00:01.698) 0:04:56.659 ******** 2026-01-05 00:59:16.938564 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.938571 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.938615 | orchestrator | changed: [testbed-node-2] 2026-01-05 
00:59:16.938623 | orchestrator | 2026-01-05 00:59:16.938633 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-05 00:59:16.938640 | orchestrator | Monday 05 January 2026 00:57:01 +0000 (0:00:02.719) 0:04:59.379 ******** 2026-01-05 00:59:16.938647 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.938652 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.938659 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.938666 | orchestrator | 2026-01-05 00:59:16.938672 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-01-05 00:59:16.938678 | orchestrator | Monday 05 January 2026 00:57:04 +0000 (0:00:03.281) 0:05:02.660 ******** 2026-01-05 00:59:16.938685 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-01-05 00:59:16.938692 | orchestrator | 2026-01-05 00:59:16.938698 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-01-05 00:59:16.938704 | orchestrator | Monday 05 January 2026 00:57:06 +0000 (0:00:01.500) 0:05:04.161 ******** 2026-01-05 00:59:16.938711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 00:59:16.938742 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.938775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 00:59:16.938783 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.938790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 00:59:16.938798 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.938804 | orchestrator | 2026-01-05 00:59:16.938811 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-01-05 00:59:16.938818 | orchestrator | Monday 05 January 2026 00:57:07 +0000 (0:00:01.364) 0:05:05.525 ******** 2026-01-05 00:59:16.938824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 
'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 00:59:16.938832 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.938839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 00:59:16.938846 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.938860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 00:59:16.938867 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.938874 | orchestrator | 2026-01-05 00:59:16.938880 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-01-05 00:59:16.938887 | orchestrator | Monday 05 January 2026 00:57:08 +0000 (0:00:01.398) 0:05:06.924 ******** 2026-01-05 00:59:16.938893 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.938900 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.938906 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.938913 | orchestrator | 2026-01-05 00:59:16.938920 | 
orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-05 00:59:16.938932 | orchestrator | Monday 05 January 2026 00:57:10 +0000 (0:00:02.035) 0:05:08.959 ******** 2026-01-05 00:59:16.938939 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:16.938946 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:16.938953 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:16.938959 | orchestrator | 2026-01-05 00:59:16.938966 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-05 00:59:16.938973 | orchestrator | Monday 05 January 2026 00:57:13 +0000 (0:00:02.551) 0:05:11.511 ******** 2026-01-05 00:59:16.938980 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:16.938987 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:16.938993 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:16.939000 | orchestrator | 2026-01-05 00:59:16.939007 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-01-05 00:59:16.939014 | orchestrator | Monday 05 January 2026 00:57:16 +0000 (0:00:03.401) 0:05:14.912 ******** 2026-01-05 00:59:16.939021 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-01-05 00:59:16.939040 | orchestrator | 2026-01-05 00:59:16.939047 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-01-05 00:59:16.939054 | orchestrator | Monday 05 January 2026 00:57:17 +0000 (0:00:01.105) 0:05:16.018 ******** 2026-01-05 00:59:16.939082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 
'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-05 00:59:16.939091 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.939098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-05 00:59:16.939105 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.939111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-05 00:59:16.939117 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.939123 | orchestrator | 2026-01-05 00:59:16.939130 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-01-05 00:59:16.939137 | orchestrator | Monday 05 January 2026 00:57:19 +0000 (0:00:01.702) 0:05:17.721 ******** 2026-01-05 00:59:16.939144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 
'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-05 00:59:16.939157 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.939167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-05 00:59:16.939174 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.939181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-05 00:59:16.939189 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.939196 | orchestrator | 2026-01-05 00:59:16.939202 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-01-05 
00:59:16.939209 | orchestrator | Monday 05 January 2026 00:57:21 +0000 (0:00:01.576) 0:05:19.297 ******** 2026-01-05 00:59:16.939216 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.939223 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.939231 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.939238 | orchestrator | 2026-01-05 00:59:16.939245 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-05 00:59:16.939252 | orchestrator | Monday 05 January 2026 00:57:23 +0000 (0:00:02.017) 0:05:21.314 ******** 2026-01-05 00:59:16.939259 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:16.939287 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:16.939294 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:16.939302 | orchestrator | 2026-01-05 00:59:16.939308 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-05 00:59:16.939315 | orchestrator | Monday 05 January 2026 00:57:25 +0000 (0:00:02.707) 0:05:24.022 ******** 2026-01-05 00:59:16.939321 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:16.939328 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:16.939335 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:16.939342 | orchestrator | 2026-01-05 00:59:16.939347 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-01-05 00:59:16.939353 | orchestrator | Monday 05 January 2026 00:57:29 +0000 (0:00:03.654) 0:05:27.677 ******** 2026-01-05 00:59:16.939358 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:59:16.939364 | orchestrator | 2026-01-05 00:59:16.939371 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-01-05 00:59:16.939378 | orchestrator | Monday 05 January 2026 00:57:31 +0000 (0:00:01.941) 0:05:29.618 ******** 2026-01-05 00:59:16.939385 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 00:59:16.939398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 00:59:16.939409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 00:59:16.939416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 00:59:16.939424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.939451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 00:59:16.939460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 00:59:16.939508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 00:59:16.939516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 00:59:16.939527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 00:59:16.939534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.939560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 00:59:16.939569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 00:59:16.939581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 00:59:16.939588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.939596 | orchestrator | 2026-01-05 00:59:16.939603 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-01-05 00:59:16.939611 | orchestrator | Monday 05 January 2026 00:57:35 +0000 (0:00:03.681) 0:05:33.300 ******** 2026-01-05 00:59:16.939621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-05 00:59:16.939628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 00:59:16.939652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 00:59:16.939661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 00:59:16.939673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-05 00:59:16.939681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.939691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 00:59:16.939699 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.939707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 00:59:16.939732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 00:59:16.939740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.939752 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.939759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-05 00:59:16.939767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 00:59:16.939777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 00:59:16.939784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 00:59:16.939808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:59:16.939816 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.939823 | orchestrator | 2026-01-05 00:59:16.939830 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-01-05 00:59:16.939837 | orchestrator | Monday 05 January 2026 00:57:35 +0000 (0:00:00.791) 0:05:34.091 ******** 2026-01-05 00:59:16.939844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-05 00:59:16.939856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-05 00:59:16.939864 | orchestrator | 
skipping: [testbed-node-0] 2026-01-05 00:59:16.939871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-05 00:59:16.939878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-05 00:59:16.939885 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.939892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-05 00:59:16.939899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-05 00:59:16.939906 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.939912 | orchestrator | 2026-01-05 00:59:16.939919 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-01-05 00:59:16.939926 | orchestrator | Monday 05 January 2026 00:57:37 +0000 (0:00:01.765) 0:05:35.857 ******** 2026-01-05 00:59:16.939933 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.939940 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.939947 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.939954 | orchestrator | 2026-01-05 00:59:16.939961 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-01-05 00:59:16.939968 | orchestrator | Monday 05 January 2026 00:57:39 +0000 (0:00:01.554) 0:05:37.412 ******** 2026-01-05 00:59:16.939975 | 
orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.939982 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.939989 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.939994 | orchestrator | 2026-01-05 00:59:16.940000 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-01-05 00:59:16.940005 | orchestrator | Monday 05 January 2026 00:57:41 +0000 (0:00:02.041) 0:05:39.453 ******** 2026-01-05 00:59:16.940010 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:59:16.940016 | orchestrator | 2026-01-05 00:59:16.940021 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-01-05 00:59:16.940070 | orchestrator | Monday 05 January 2026 00:57:42 +0000 (0:00:01.296) 0:05:40.750 ******** 2026-01-05 00:59:16.940079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 00:59:16.940109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 00:59:16.940121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 00:59:16.940128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 00:59:16.940139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 00:59:16.940200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 00:59:16.940215 | orchestrator | 2026-01-05 00:59:16.940223 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-01-05 00:59:16.940230 | orchestrator | Monday 05 January 2026 00:57:48 +0000 (0:00:05.942) 0:05:46.693 ******** 2026-01-05 00:59:16.940237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-05 00:59:16.940245 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-05 00:59:16.940252 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.940263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  
2026-01-05 00:59:16.940271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-05 00:59:16.940299 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.940308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': 
['option dontlog-normal']}}}})  2026-01-05 00:59:16.940316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-05 00:59:16.940323 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.940331 | orchestrator | 2026-01-05 00:59:16.940338 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-01-05 00:59:16.940345 | orchestrator | Monday 05 January 2026 00:57:49 +0000 (0:00:00.750) 0:05:47.444 ******** 2026-01-05 00:59:16.940352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-05 00:59:16.940360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-05 00:59:16.940371 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-05 00:59:16.940379 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.940386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-05 00:59:16.940401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-05 00:59:16.940408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-05 00:59:16.940415 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.940422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-05 00:59:16.940430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-05 00:59:16.940455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password'}})  2026-01-05 00:59:16.940464 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.940471 | orchestrator | 2026-01-05 00:59:16.940478 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-01-05 00:59:16.940485 | orchestrator | Monday 05 January 2026 00:57:50 +0000 (0:00:00.987) 0:05:48.432 ******** 2026-01-05 00:59:16.940492 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.940499 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.940506 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.940513 | orchestrator | 2026-01-05 00:59:16.940520 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-01-05 00:59:16.940528 | orchestrator | Monday 05 January 2026 00:57:51 +0000 (0:00:00.917) 0:05:49.349 ******** 2026-01-05 00:59:16.940535 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.940542 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.940549 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.940555 | orchestrator | 2026-01-05 00:59:16.940563 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-01-05 00:59:16.940569 | orchestrator | Monday 05 January 2026 00:57:52 +0000 (0:00:01.464) 0:05:50.813 ******** 2026-01-05 00:59:16.940576 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:59:16.940583 | orchestrator | 2026-01-05 00:59:16.940590 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-01-05 00:59:16.940597 | orchestrator | Monday 05 January 2026 00:57:54 +0000 (0:00:01.554) 0:05:52.368 ******** 2026-01-05 00:59:16.940604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-05 00:59:16.940612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 00:59:16.940626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:59:16.940634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:59:16.940642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 00:59:16.940667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-05 00:59:16.940676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-05 00:59:16.940683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 00:59:16.940716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 00:59:16.940727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:59:16.940735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:59:16.940743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:59:16.940769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:59:16.940778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 00:59:16.940786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 00:59:16.940793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-05 00:59:16.940809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': 
{'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-05 00:59:16.940817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:59:16.940828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:59:16.940836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 00:59:16.940843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-05 00:59:16.940860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-05 00:59:16.940868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-05 00:59:16.940881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-05 00:59:16.940889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:59:16.940897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:59:16.940908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:59:16.940916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 00:59:16.940923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:59:16.940931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 00:59:16.940938 | orchestrator | 2026-01-05 00:59:16.940945 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-01-05 00:59:16.940952 | orchestrator | Monday 05 January 2026 00:57:59 +0000 (0:00:05.176) 0:05:57.544 ******** 2026-01-05 00:59:16.940984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-05 00:59:16.940995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 00:59:16.941013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:59:16.941038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:59:16.941046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 00:59:16.941055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-05 00:59:16.941062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 
'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-05 00:59:16.941073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:59:16.941080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:59:16.941101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-05 00:59:16.941108 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:59:16.941115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-05 00:59:16.941136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 00:59:16.941145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:59:16.941152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:59:16.941163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-05 00:59:16.941170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 00:59:16.941182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 00:59:16.941190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-05 00:59:16.941201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:59:16.941208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:59:16.941219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-01-05 00:59:16.941227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 00:59:16.941238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:59:16.941245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-05 00:59:16.941256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:59:16.941263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-01-05 00:59:16.941273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-05 00:59:16.941280 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:59:16.941288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:59:16.941299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:59:16.941307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-05 00:59:16.941314 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:59:16.941321 | orchestrator |
2026-01-05 00:59:16.941328 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2026-01-05 00:59:16.941335 | orchestrator | Monday 05 January 2026 00:58:00 +0000 (0:00:01.582) 0:05:59.127 ********
2026-01-05 00:59:16.941342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-01-05 00:59:16.941350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-01-05 00:59:16.941358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-05 00:59:16.941368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-05 00:59:16.941376 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:59:16.941383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-01-05 00:59:16.941390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-01-05 00:59:16.941397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-05 00:59:16.941405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-05 00:59:16.941416 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:59:16.941427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-01-05 00:59:16.941435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-01-05 00:59:16.941442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-05 00:59:16.941450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-05 00:59:16.941457 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:59:16.941464 | orchestrator |
2026-01-05 00:59:16.941471 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2026-01-05 00:59:16.941479 | orchestrator | Monday 05 January 2026 00:58:02 +0000 (0:00:01.264) 0:06:00.392 ********
2026-01-05 00:59:16.941486 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:59:16.941493 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:59:16.941500 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:59:16.941507 | orchestrator |
2026-01-05 00:59:16.941514 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2026-01-05 00:59:16.941521 | orchestrator | Monday 05 January 2026 00:58:02 +0000 (0:00:00.495) 0:06:00.888 ********
2026-01-05 00:59:16.941528 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:59:16.941535 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:59:16.941541 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:59:16.941546 | orchestrator |
2026-01-05 00:59:16.941552 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2026-01-05 00:59:16.941557 | orchestrator | Monday 05 January 2026 00:58:04 +0000 (0:00:01.731) 0:06:02.619 ********
2026-01-05 00:59:16.941563 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:59:16.941569 | orchestrator |
2026-01-05 00:59:16.941575 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-01-05 00:59:16.941580 | orchestrator | Monday 05 January 2026 00:58:06 +0000 (0:00:02.193) 0:06:04.813 ********
2026-01-05 00:59:16.941589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-05 00:59:16.941596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-05 00:59:16.941612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-05 00:59:16.941619 | orchestrator |
2026-01-05 00:59:16.941625 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-01-05 00:59:16.941632 | orchestrator | Monday 05 January 2026 00:58:09 +0000 (0:00:02.774) 0:06:07.588 ********
2026-01-05 00:59:16.941640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-05 00:59:16.941650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-05 00:59:16.941658 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:59:16.941665 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:59:16.941676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-05 00:59:16.941684 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:59:16.941691 | orchestrator |
2026-01-05 00:59:16.941698 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2026-01-05 00:59:16.941708 | orchestrator | Monday 05 January 2026 00:58:09 +0000 (0:00:00.445) 0:06:08.034 ********
2026-01-05 00:59:16.941716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-01-05 00:59:16.941723 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:59:16.941730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-01-05 00:59:16.941738 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:59:16.941745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-01-05 00:59:16.941752 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:59:16.941759 | orchestrator |
2026-01-05 00:59:16.941766 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2026-01-05 00:59:16.941773 | orchestrator | Monday 05 January 2026 00:58:11 +0000 (0:00:01.313) 0:06:09.347 ********
2026-01-05 00:59:16.941779 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:59:16.941786 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:59:16.941793 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:59:16.941800 | orchestrator |
2026-01-05 00:59:16.941807 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2026-01-05 00:59:16.941814 | orchestrator | Monday 05 January 2026 00:58:11 +0000 (0:00:00.508) 0:06:09.856 ********
2026-01-05 00:59:16.941820 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:59:16.941827 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:59:16.941834 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:59:16.941841 | orchestrator |
2026-01-05 00:59:16.941848 | orchestrator | TASK [include_role : skyline] **************************************************
2026-01-05 00:59:16.941854 | orchestrator | Monday 05 January 2026 00:58:13 +0000 (0:00:01.706) 0:06:11.562 ********
2026-01-05 00:59:16.941862 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:59:16.941869 | orchestrator |
2026-01-05 00:59:16.941876 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2026-01-05 00:59:16.941883 | orchestrator | Monday 05 January 2026 00:58:15 +0000 (0:00:02.069) 0:06:13.632 ********
2026-01-05 00:59:16.941892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-01-05 00:59:16.941908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-01-05 00:59:16.941919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-01-05 00:59:16.941927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-01-05 00:59:16.941935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-01-05 00:59:16.941949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-01-05 00:59:16.941956 | orchestrator |
2026-01-05 00:59:16.941963 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2026-01-05 00:59:16.941970 | orchestrator | Monday 05 January 2026 00:58:22 +0000 (0:00:06.971) 0:06:20.604 ********
2026-01-05 00:59:16.941981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-01-05 00:59:16.941988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-01-05 00:59:16.941996 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:59:16.942003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-01-05 00:59:16.942101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-01-05 00:59:16.942114 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:59:16.942120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-01-05 00:59:16.942132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-01-05 00:59:16.942138 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:59:16.942144 | orchestrator |
2026-01-05 00:59:16.942151 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2026-01-05 00:59:16.942158 | orchestrator | Monday 05 January 2026 00:58:23 +0000 (0:00:00.656) 0:06:21.261 ********
2026-01-05 00:59:16.942165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-01-05 00:59:16.942172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-01-05 00:59:16.942179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-01-05 00:59:16.942191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes',
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-05 00:59:16.942199 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.942205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-05 00:59:16.942213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-05 00:59:16.942220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-05 00:59:16.942230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-05 00:59:16.942237 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.942244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-05 00:59:16.942251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-05 00:59:16.942258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-05 00:59:16.942266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-05 00:59:16.942272 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.942279 | orchestrator | 2026-01-05 00:59:16.942286 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-01-05 00:59:16.942296 | orchestrator | Monday 05 January 2026 00:58:25 +0000 (0:00:02.100) 0:06:23.362 ******** 2026-01-05 00:59:16.942303 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.942309 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.942317 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.942324 | orchestrator | 2026-01-05 00:59:16.942331 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-01-05 00:59:16.942341 | orchestrator | Monday 05 January 2026 00:58:26 +0000 (0:00:01.456) 0:06:24.818 ******** 2026-01-05 00:59:16.942348 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.942355 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.942362 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.942370 | orchestrator | 2026-01-05 00:59:16.942377 | orchestrator | TASK [include_role : swift] **************************************************** 2026-01-05 00:59:16.942383 | orchestrator | Monday 05 January 2026 00:58:28 +0000 (0:00:02.303) 0:06:27.121 ******** 2026-01-05 00:59:16.942390 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.942397 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.942404 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.942411 | orchestrator | 2026-01-05 00:59:16.942418 | 
orchestrator | TASK [include_role : tacker] *************************************************** 2026-01-05 00:59:16.942430 | orchestrator | Monday 05 January 2026 00:58:29 +0000 (0:00:00.370) 0:06:27.492 ******** 2026-01-05 00:59:16.942437 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.942444 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.942451 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.942458 | orchestrator | 2026-01-05 00:59:16.942465 | orchestrator | TASK [include_role : trove] **************************************************** 2026-01-05 00:59:16.942472 | orchestrator | Monday 05 January 2026 00:58:29 +0000 (0:00:00.338) 0:06:27.830 ******** 2026-01-05 00:59:16.942479 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.942485 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.942492 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.942500 | orchestrator | 2026-01-05 00:59:16.942506 | orchestrator | TASK [include_role : venus] **************************************************** 2026-01-05 00:59:16.942513 | orchestrator | Monday 05 January 2026 00:58:30 +0000 (0:00:00.823) 0:06:28.654 ******** 2026-01-05 00:59:16.942520 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.942526 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.942534 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.942541 | orchestrator | 2026-01-05 00:59:16.942548 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-01-05 00:59:16.942555 | orchestrator | Monday 05 January 2026 00:58:30 +0000 (0:00:00.375) 0:06:29.030 ******** 2026-01-05 00:59:16.942562 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.942569 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.942576 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.942583 | orchestrator | 2026-01-05 00:59:16.942589 | 
orchestrator | TASK [include_role : zun] ****************************************************** 2026-01-05 00:59:16.942596 | orchestrator | Monday 05 January 2026 00:58:31 +0000 (0:00:00.368) 0:06:29.399 ******** 2026-01-05 00:59:16.942603 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.942610 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.942617 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.942624 | orchestrator | 2026-01-05 00:59:16.942631 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-01-05 00:59:16.942638 | orchestrator | Monday 05 January 2026 00:58:32 +0000 (0:00:00.883) 0:06:30.282 ******** 2026-01-05 00:59:16.942645 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:16.942652 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:16.942659 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:16.942666 | orchestrator | 2026-01-05 00:59:16.942673 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-01-05 00:59:16.942679 | orchestrator | Monday 05 January 2026 00:58:32 +0000 (0:00:00.728) 0:06:31.010 ******** 2026-01-05 00:59:16.942688 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:16.942696 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:16.942703 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:16.942709 | orchestrator | 2026-01-05 00:59:16.942716 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-01-05 00:59:16.942723 | orchestrator | Monday 05 January 2026 00:58:33 +0000 (0:00:00.361) 0:06:31.372 ******** 2026-01-05 00:59:16.942730 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:16.942737 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:16.942744 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:16.942754 | orchestrator | 2026-01-05 00:59:16.942762 | orchestrator | RUNNING HANDLER [loadbalancer : Stop 
backup haproxy container] ***************** 2026-01-05 00:59:16.942769 | orchestrator | Monday 05 January 2026 00:58:34 +0000 (0:00:00.945) 0:06:32.317 ******** 2026-01-05 00:59:16.942776 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:16.942783 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:16.942789 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:16.942796 | orchestrator | 2026-01-05 00:59:16.942803 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-01-05 00:59:16.942810 | orchestrator | Monday 05 January 2026 00:58:35 +0000 (0:00:01.440) 0:06:33.757 ******** 2026-01-05 00:59:16.942822 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:16.942828 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:16.942835 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:16.942842 | orchestrator | 2026-01-05 00:59:16.942849 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-01-05 00:59:16.942856 | orchestrator | Monday 05 January 2026 00:58:36 +0000 (0:00:00.953) 0:06:34.711 ******** 2026-01-05 00:59:16.942863 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.942869 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.942876 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.942883 | orchestrator | 2026-01-05 00:59:16.942890 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-01-05 00:59:16.942897 | orchestrator | Monday 05 January 2026 00:58:41 +0000 (0:00:05.099) 0:06:39.811 ******** 2026-01-05 00:59:16.942904 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:16.942911 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:16.942918 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:16.942925 | orchestrator | 2026-01-05 00:59:16.942932 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-01-05 
00:59:16.942939 | orchestrator | Monday 05 January 2026 00:58:44 +0000 (0:00:02.832) 0:06:42.643 ******** 2026-01-05 00:59:16.942946 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.942954 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.942960 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.942967 | orchestrator | 2026-01-05 00:59:16.942974 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-01-05 00:59:16.942981 | orchestrator | Monday 05 January 2026 00:58:54 +0000 (0:00:10.236) 0:06:52.879 ******** 2026-01-05 00:59:16.942987 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:16.942998 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:16.943005 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:16.943012 | orchestrator | 2026-01-05 00:59:16.943018 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-01-05 00:59:16.943037 | orchestrator | Monday 05 January 2026 00:58:58 +0000 (0:00:04.254) 0:06:57.134 ******** 2026-01-05 00:59:16.943044 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:16.943050 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:16.943057 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:16.943064 | orchestrator | 2026-01-05 00:59:16.943071 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-01-05 00:59:16.943078 | orchestrator | Monday 05 January 2026 00:59:08 +0000 (0:00:09.480) 0:07:06.615 ******** 2026-01-05 00:59:16.943085 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.943092 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.943099 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.943105 | orchestrator | 2026-01-05 00:59:16.943112 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-01-05 00:59:16.943118 | 
orchestrator | Monday 05 January 2026 00:59:08 +0000 (0:00:00.386) 0:07:07.001 ******** 2026-01-05 00:59:16.943124 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.943131 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.943137 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.943144 | orchestrator | 2026-01-05 00:59:16.943151 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-01-05 00:59:16.943158 | orchestrator | Monday 05 January 2026 00:59:09 +0000 (0:00:00.383) 0:07:07.385 ******** 2026-01-05 00:59:16.943169 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.943176 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.943184 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.943191 | orchestrator | 2026-01-05 00:59:16.943198 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-01-05 00:59:16.943205 | orchestrator | Monday 05 January 2026 00:59:09 +0000 (0:00:00.753) 0:07:08.138 ******** 2026-01-05 00:59:16.943212 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.943225 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.943232 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.943239 | orchestrator | 2026-01-05 00:59:16.943246 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-01-05 00:59:16.943253 | orchestrator | Monday 05 January 2026 00:59:10 +0000 (0:00:00.422) 0:07:08.561 ******** 2026-01-05 00:59:16.943260 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.943267 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.943274 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.943280 | orchestrator | 2026-01-05 00:59:16.943287 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-01-05 00:59:16.943294 | 
orchestrator | Monday 05 January 2026 00:59:10 +0000 (0:00:00.363) 0:07:08.924 ******** 2026-01-05 00:59:16.943301 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:59:16.943308 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:59:16.943315 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:59:16.943322 | orchestrator | 2026-01-05 00:59:16.943328 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-01-05 00:59:16.943335 | orchestrator | Monday 05 January 2026 00:59:11 +0000 (0:00:00.374) 0:07:09.299 ******** 2026-01-05 00:59:16.943342 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:16.943349 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:16.943356 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:16.943363 | orchestrator | 2026-01-05 00:59:16.943370 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-01-05 00:59:16.943377 | orchestrator | Monday 05 January 2026 00:59:12 +0000 (0:00:01.480) 0:07:10.780 ******** 2026-01-05 00:59:16.943384 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:59:16.943391 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:59:16.943398 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:59:16.943404 | orchestrator | 2026-01-05 00:59:16.943420 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:59:16.943427 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-01-05 00:59:16.943434 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-01-05 00:59:16.943441 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-01-05 00:59:16.943448 | orchestrator | 2026-01-05 00:59:16.943455 | orchestrator | 2026-01-05 00:59:16.943462 | orchestrator | TASKS RECAP 
******************************************************************** 2026-01-05 00:59:16.943469 | orchestrator | Monday 05 January 2026 00:59:13 +0000 (0:00:00.892) 0:07:11.673 ******** 2026-01-05 00:59:16.943476 | orchestrator | =============================================================================== 2026-01-05 00:59:16.943483 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 10.24s 2026-01-05 00:59:16.943489 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.48s 2026-01-05 00:59:16.943496 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 7.25s 2026-01-05 00:59:16.943503 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.97s 2026-01-05 00:59:16.943510 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 6.76s 2026-01-05 00:59:16.943517 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 6.65s 2026-01-05 00:59:16.943524 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.94s 2026-01-05 00:59:16.943531 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 5.91s 2026-01-05 00:59:16.943538 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.36s 2026-01-05 00:59:16.943549 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.18s 2026-01-05 00:59:16.943561 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 5.10s 2026-01-05 00:59:16.943568 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.85s 2026-01-05 00:59:16.943574 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.78s 2026-01-05 00:59:16.943581 | orchestrator | haproxy-config : Copying over 
neutron haproxy config -------------------- 4.60s 2026-01-05 00:59:16.943588 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.59s 2026-01-05 00:59:16.943595 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.48s 2026-01-05 00:59:16.943602 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.46s 2026-01-05 00:59:16.943608 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.41s 2026-01-05 00:59:16.943615 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.30s 2026-01-05 00:59:16.943622 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.26s 2026-01-05 00:59:16.943630 | orchestrator | 2026-01-05 00:59:16 | INFO  | Task be4b49fc-ca60-422e-935c-4fef4fd9f567 is in state STARTED 2026-01-05 00:59:16.943637 | orchestrator | 2026-01-05 00:59:16 | INFO  | Task af560da7-6454-40d3-b3d0-98778f7a574e is in state STARTED 2026-01-05 00:59:16.943644 | orchestrator | 2026-01-05 00:59:16 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:59:16.943651 | orchestrator | 2026-01-05 00:59:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:59:19.974499 | orchestrator | 2026-01-05 00:59:19 | INFO  | Task be4b49fc-ca60-422e-935c-4fef4fd9f567 is in state STARTED 2026-01-05 00:59:19.976546 | orchestrator | 2026-01-05 00:59:19 | INFO  | Task af560da7-6454-40d3-b3d0-98778f7a574e is in state STARTED 2026-01-05 00:59:19.977961 | orchestrator | 2026-01-05 00:59:19 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:59:19.978153 | orchestrator | 2026-01-05 00:59:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 00:59:23.018451 | orchestrator | 2026-01-05 00:59:23 | INFO  | Task be4b49fc-ca60-422e-935c-4fef4fd9f567 is in state STARTED 2026-01-05 00:59:23.020724 | 
orchestrator | 2026-01-05 00:59:23 | INFO  | Task af560da7-6454-40d3-b3d0-98778f7a574e is in state STARTED 2026-01-05 00:59:23.022416 | orchestrator | 2026-01-05 00:59:23 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 00:59:23.022440 | orchestrator | 2026-01-05 00:59:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:00:33.259383 | orchestrator | 2026-01-05 01:00:33 | INFO  | Task be4b49fc-ca60-422e-935c-4fef4fd9f567 is in state STARTED 2026-01-05 01:00:33.259442 | orchestrator
| 2026-01-05 01:00:33 | INFO  | Task af560da7-6454-40d3-b3d0-98778f7a574e is in state STARTED 2026-01-05 01:00:33.260464 | orchestrator | 2026-01-05 01:00:33 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 01:00:33.260536 | orchestrator | 2026-01-05 01:00:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:00:36.305406 | orchestrator | 2026-01-05 01:00:36 | INFO  | Task be4b49fc-ca60-422e-935c-4fef4fd9f567 is in state STARTED 2026-01-05 01:00:36.307178 | orchestrator | 2026-01-05 01:00:36 | INFO  | Task af560da7-6454-40d3-b3d0-98778f7a574e is in state STARTED 2026-01-05 01:00:36.308774 | orchestrator | 2026-01-05 01:00:36 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 01:00:36.308832 | orchestrator | 2026-01-05 01:00:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:00:39.366086 | orchestrator | 2026-01-05 01:00:39 | INFO  | Task be4b49fc-ca60-422e-935c-4fef4fd9f567 is in state STARTED 2026-01-05 01:00:39.369501 | orchestrator | 2026-01-05 01:00:39 | INFO  | Task af560da7-6454-40d3-b3d0-98778f7a574e is in state STARTED 2026-01-05 01:00:39.381263 | orchestrator | 2026-01-05 01:00:39 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 01:00:39.382337 | orchestrator | 2026-01-05 01:00:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:00:42.442743 | orchestrator | 2026-01-05 01:00:42 | INFO  | Task be4b49fc-ca60-422e-935c-4fef4fd9f567 is in state STARTED 2026-01-05 01:00:42.443090 | orchestrator | 2026-01-05 01:00:42 | INFO  | Task af560da7-6454-40d3-b3d0-98778f7a574e is in state STARTED 2026-01-05 01:00:42.444014 | orchestrator | 2026-01-05 01:00:42 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 01:00:42.444042 | orchestrator | 2026-01-05 01:00:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:00:45.494276 | orchestrator | 2026-01-05 01:00:45 | INFO  | Task 
be4b49fc-ca60-422e-935c-4fef4fd9f567 is in state STARTED 2026-01-05 01:00:45.496354 | orchestrator | 2026-01-05 01:00:45 | INFO  | Task af560da7-6454-40d3-b3d0-98778f7a574e is in state STARTED 2026-01-05 01:00:45.497957 | orchestrator | 2026-01-05 01:00:45 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 01:00:45.498198 | orchestrator | 2026-01-05 01:00:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:00:48.545522 | orchestrator | 2026-01-05 01:00:48 | INFO  | Task be4b49fc-ca60-422e-935c-4fef4fd9f567 is in state STARTED 2026-01-05 01:00:48.546810 | orchestrator | 2026-01-05 01:00:48 | INFO  | Task af560da7-6454-40d3-b3d0-98778f7a574e is in state STARTED 2026-01-05 01:00:48.548128 | orchestrator | 2026-01-05 01:00:48 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 01:00:48.548160 | orchestrator | 2026-01-05 01:00:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:00:51.598995 | orchestrator | 2026-01-05 01:00:51 | INFO  | Task be4b49fc-ca60-422e-935c-4fef4fd9f567 is in state STARTED 2026-01-05 01:00:51.600344 | orchestrator | 2026-01-05 01:00:51 | INFO  | Task af560da7-6454-40d3-b3d0-98778f7a574e is in state STARTED 2026-01-05 01:00:51.602764 | orchestrator | 2026-01-05 01:00:51 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 01:00:51.602998 | orchestrator | 2026-01-05 01:00:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:00:54.653475 | orchestrator | 2026-01-05 01:00:54 | INFO  | Task be4b49fc-ca60-422e-935c-4fef4fd9f567 is in state STARTED 2026-01-05 01:00:54.654946 | orchestrator | 2026-01-05 01:00:54 | INFO  | Task af560da7-6454-40d3-b3d0-98778f7a574e is in state STARTED 2026-01-05 01:00:54.656839 | orchestrator | 2026-01-05 01:00:54 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 01:00:54.656951 | orchestrator | 2026-01-05 01:00:54 | INFO  | Wait 1 second(s) until the next 
check 2026-01-05 01:00:57.710277 | orchestrator | 2026-01-05 01:00:57 | INFO  | Task be4b49fc-ca60-422e-935c-4fef4fd9f567 is in state STARTED 2026-01-05 01:00:57.712951 | orchestrator | 2026-01-05 01:00:57 | INFO  | Task af560da7-6454-40d3-b3d0-98778f7a574e is in state STARTED 2026-01-05 01:00:57.714904 | orchestrator | 2026-01-05 01:00:57 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 01:00:57.714960 | orchestrator | 2026-01-05 01:00:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:01:00.769487 | orchestrator | 2026-01-05 01:01:00 | INFO  | Task be4b49fc-ca60-422e-935c-4fef4fd9f567 is in state STARTED 2026-01-05 01:01:00.771326 | orchestrator | 2026-01-05 01:01:00 | INFO  | Task af560da7-6454-40d3-b3d0-98778f7a574e is in state STARTED 2026-01-05 01:01:00.773550 | orchestrator | 2026-01-05 01:01:00 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 01:01:00.773615 | orchestrator | 2026-01-05 01:01:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:01:03.820621 | orchestrator | 2026-01-05 01:01:03 | INFO  | Task be4b49fc-ca60-422e-935c-4fef4fd9f567 is in state STARTED 2026-01-05 01:01:03.822523 | orchestrator | 2026-01-05 01:01:03 | INFO  | Task af560da7-6454-40d3-b3d0-98778f7a574e is in state STARTED 2026-01-05 01:01:03.824279 | orchestrator | 2026-01-05 01:01:03 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 01:01:03.824335 | orchestrator | 2026-01-05 01:01:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:01:06.876441 | orchestrator | 2026-01-05 01:01:06 | INFO  | Task be4b49fc-ca60-422e-935c-4fef4fd9f567 is in state STARTED 2026-01-05 01:01:06.877909 | orchestrator | 2026-01-05 01:01:06 | INFO  | Task af560da7-6454-40d3-b3d0-98778f7a574e is in state STARTED 2026-01-05 01:01:06.879368 | orchestrator | 2026-01-05 01:01:06 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 
01:01:06.879422 | orchestrator | 2026-01-05 01:01:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:01:09.932316 | orchestrator | 2026-01-05 01:01:09 | INFO  | Task be4b49fc-ca60-422e-935c-4fef4fd9f567 is in state STARTED 2026-01-05 01:01:09.935035 | orchestrator | 2026-01-05 01:01:09 | INFO  | Task af560da7-6454-40d3-b3d0-98778f7a574e is in state STARTED 2026-01-05 01:01:09.938139 | orchestrator | 2026-01-05 01:01:09 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 01:01:09.938212 | orchestrator | 2026-01-05 01:01:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:01:12.982485 | orchestrator | 2026-01-05 01:01:12 | INFO  | Task be4b49fc-ca60-422e-935c-4fef4fd9f567 is in state STARTED 2026-01-05 01:01:12.984956 | orchestrator | 2026-01-05 01:01:12 | INFO  | Task af560da7-6454-40d3-b3d0-98778f7a574e is in state STARTED 2026-01-05 01:01:12.986121 | orchestrator | 2026-01-05 01:01:12 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 01:01:12.986165 | orchestrator | 2026-01-05 01:01:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:01:16.035717 | orchestrator | 2026-01-05 01:01:16 | INFO  | Task be4b49fc-ca60-422e-935c-4fef4fd9f567 is in state STARTED 2026-01-05 01:01:16.037314 | orchestrator | 2026-01-05 01:01:16 | INFO  | Task af560da7-6454-40d3-b3d0-98778f7a574e is in state STARTED 2026-01-05 01:01:16.038107 | orchestrator | 2026-01-05 01:01:16 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 01:01:16.038144 | orchestrator | 2026-01-05 01:01:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:01:19.082305 | orchestrator | 2026-01-05 01:01:19 | INFO  | Task be4b49fc-ca60-422e-935c-4fef4fd9f567 is in state STARTED 2026-01-05 01:01:19.083157 | orchestrator | 2026-01-05 01:01:19 | INFO  | Task af560da7-6454-40d3-b3d0-98778f7a574e is in state STARTED 2026-01-05 01:01:19.084162 | orchestrator | 2026-01-05 01:01:19 | 
INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 01:01:19.084454 | orchestrator | 2026-01-05 01:01:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:01:22.128128 | orchestrator | 2026-01-05 01:01:22 | INFO  | Task be4b49fc-ca60-422e-935c-4fef4fd9f567 is in state STARTED 2026-01-05 01:01:22.130109 | orchestrator | 2026-01-05 01:01:22 | INFO  | Task af560da7-6454-40d3-b3d0-98778f7a574e is in state STARTED 2026-01-05 01:01:22.131692 | orchestrator | 2026-01-05 01:01:22 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 01:01:22.131743 | orchestrator | 2026-01-05 01:01:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:01:25.182241 | orchestrator | 2026-01-05 01:01:25 | INFO  | Task be4b49fc-ca60-422e-935c-4fef4fd9f567 is in state STARTED 2026-01-05 01:01:25.182924 | orchestrator | 2026-01-05 01:01:25 | INFO  | Task af560da7-6454-40d3-b3d0-98778f7a574e is in state STARTED 2026-01-05 01:01:25.184481 | orchestrator | 2026-01-05 01:01:25 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state STARTED 2026-01-05 01:01:25.184590 | orchestrator | 2026-01-05 01:01:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:01:28.232931 | orchestrator | 2026-01-05 01:01:28 | INFO  | Task be4b49fc-ca60-422e-935c-4fef4fd9f567 is in state STARTED 2026-01-05 01:01:28.235145 | orchestrator | 2026-01-05 01:01:28 | INFO  | Task af560da7-6454-40d3-b3d0-98778f7a574e is in state STARTED 2026-01-05 01:01:28.237592 | orchestrator | 2026-01-05 01:01:28 | INFO  | Task 41c5898b-a017-42ce-b3f3-a59db613cf71 is in state SUCCESS 2026-01-05 01:01:28.239850 | orchestrator | 2026-01-05 01:01:28.239919 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-05 01:01:28.239928 | orchestrator | 2.16.14 2026-01-05 01:01:28.239935 | orchestrator | 2026-01-05 01:01:28.239941 | orchestrator | PLAY [Prepare deployment of Ceph services] 
2026-01-05 01:01:28.239949 | orchestrator | 
2026-01-05 01:01:28.239955 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-01-05 01:01:28.239963 | orchestrator | Monday 05 January 2026 00:49:24 +0000 (0:00:00.778) 0:00:00.778 ********
2026-01-05 01:01:28.240000 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:01:28.240010 | orchestrator | 
2026-01-05 01:01:28.240016 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-01-05 01:01:28.240023 | orchestrator | Monday 05 January 2026 00:49:26 +0000 (0:00:01.259) 0:00:02.038 ********
2026-01-05 01:01:28.240029 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:01:28.240047 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:01:28.240054 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:01:28.240061 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:01:28.240068 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:01:28.240075 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:01:28.240081 | orchestrator | 
2026-01-05 01:01:28.240087 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-01-05 01:01:28.240094 | orchestrator | Monday 05 January 2026 00:49:27 +0000 (0:00:01.461) 0:00:03.499 ********
2026-01-05 01:01:28.240101 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:01:28.240107 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:01:28.240114 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:01:28.240120 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:01:28.240127 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:01:28.240299 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:01:28.240314 | orchestrator | 
2026-01-05 01:01:28.240318 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-01-05 01:01:28.240323 | orchestrator | Monday 05 January 2026 00:49:28 +0000 (0:00:00.819) 0:00:04.319 ********
2026-01-05 01:01:28.240327 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:01:28.240331 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:01:28.240335 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:01:28.240339 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:01:28.240362 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:01:28.240366 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:01:28.240370 | orchestrator | 
2026-01-05 01:01:28.240374 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-01-05 01:01:28.240378 | orchestrator | Monday 05 January 2026 00:49:29 +0000 (0:00:01.210) 0:00:05.529 ********
2026-01-05 01:01:28.240382 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:01:28.240386 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:01:28.240390 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:01:28.240393 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:01:28.240397 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:01:28.240402 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:01:28.240405 | orchestrator | 
2026-01-05 01:01:28.240409 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-01-05 01:01:28.240413 | orchestrator | Monday 05 January 2026 00:49:30 +0000 (0:00:00.886) 0:00:06.416 ********
2026-01-05 01:01:28.240417 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:01:28.240421 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:01:28.240424 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:01:28.240428 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:01:28.240441 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:01:28.240445 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:01:28.240449 | orchestrator | 
2026-01-05 01:01:28.240453 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-01-05 01:01:28.240457 | orchestrator | Monday 05 January 2026 00:49:30 +0000 (0:00:00.592) 0:00:07.008 ********
2026-01-05 01:01:28.240460 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:01:28.240464 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:01:28.240468 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:01:28.240471 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:01:28.240475 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:01:28.240479 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:01:28.240482 | orchestrator | 
2026-01-05 01:01:28.240486 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-01-05 01:01:28.240490 | orchestrator | Monday 05 January 2026 00:49:31 +0000 (0:00:00.856) 0:00:07.865 ********
2026-01-05 01:01:28.240494 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.240499 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.240503 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.240507 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:28.240510 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:28.240514 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:28.240518 | orchestrator | 
2026-01-05 01:01:28.240524 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-01-05 01:01:28.240530 | orchestrator | Monday 05 January 2026 00:49:32 +0000 (0:00:00.848) 0:00:08.714 ********
2026-01-05 01:01:28.240536 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:01:28.240547 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:01:28.240553 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:01:28.240562 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:01:28.240568 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:01:28.240574 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:01:28.240579 | orchestrator | 
2026-01-05 01:01:28.240585 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-01-05 01:01:28.240591 | orchestrator | Monday 05 January 2026 00:49:33 +0000 (0:00:00.863) 0:00:09.577 ********
2026-01-05 01:01:28.240596 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-05 01:01:28.240602 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-05 01:01:28.240608 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-05 01:01:28.240614 | orchestrator | 
2026-01-05 01:01:28.240621 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-01-05 01:01:28.240627 | orchestrator | Monday 05 January 2026 00:49:34 +0000 (0:00:00.914) 0:00:10.492 ********
2026-01-05 01:01:28.240640 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:01:28.240646 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:01:28.241247 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:01:28.241341 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:01:28.241354 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:01:28.241360 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:01:28.241366 | orchestrator | 
2026-01-05 01:01:28.241372 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-01-05 01:01:28.241380 | orchestrator | Monday 05 January 2026 00:49:35 +0000 (0:00:01.409) 0:00:11.902 ********
2026-01-05 01:01:28.241386 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-05 01:01:28.241392 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-05 01:01:28.241399 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-05 01:01:28.241405 | orchestrator | 
2026-01-05 01:01:28.241411 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-01-05 01:01:28.241417 | orchestrator | Monday 05 January 2026 00:49:39 +0000 (0:00:04.082) 0:00:15.984 ********
2026-01-05 01:01:28.241424 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0) 
2026-01-05 01:01:28.241719 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1) 
2026-01-05 01:01:28.241740 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2) 
2026-01-05 01:01:28.241745 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.241749 | orchestrator | 
2026-01-05 01:01:28.241753 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-01-05 01:01:28.241757 | orchestrator | Monday 05 January 2026 00:49:41 +0000 (0:00:01.724) 0:00:17.708 ********
2026-01-05 01:01:28.241764 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 
2026-01-05 01:01:28.241771 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 
2026-01-05 01:01:28.241775 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 
2026-01-05 01:01:28.241779 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.241783 | orchestrator | 
2026-01-05 01:01:28.241786 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-01-05 01:01:28.241790 | orchestrator | Monday 05 January 2026 00:49:42 +0000 (0:00:00.773) 0:00:18.482 ********
2026-01-05 01:01:28.241824 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'}) 
2026-01-05 01:01:28.241831 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'}) 
2026-01-05 01:01:28.241835 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'}) 
2026-01-05 01:01:28.241857 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.241861 | orchestrator | 
2026-01-05 01:01:28.241871 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-01-05 01:01:28.241875 | orchestrator | Monday 05 January 2026 00:49:43 +0000 (0:00:00.625) 0:00:19.108 ********
2026-01-05 01:01:28.241901 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-05 00:49:37.019404', 'end': '2026-01-05 00:49:37.356027', 'delta': '0:00:00.336623', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 
2026-01-05 01:01:28.241909 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-05 00:49:37.972395', 'end': '2026-01-05 00:49:38.301530', 'delta': '0:00:00.329135', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 
2026-01-05 01:01:28.241913 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-05 00:49:39.074141', 'end': '2026-01-05 00:49:39.335510', 'delta': '0:00:00.261369', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 
2026-01-05 01:01:28.241917 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.241921 | orchestrator | 
2026-01-05 01:01:28.241925 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-01-05 01:01:28.241929 | orchestrator | Monday 05 January 2026 00:49:43 +0000 (0:00:00.272) 0:00:19.381 ********
2026-01-05 01:01:28.241932 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:01:28.241936 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:01:28.241940 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:01:28.241944 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:01:28.241948 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:01:28.241951 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:01:28.241955 | orchestrator | 
2026-01-05 01:01:28.241959 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-01-05 01:01:28.241963 | orchestrator | Monday 05 January 2026 00:49:45 +0000 (0:00:02.633) 0:00:22.014 ********
2026-01-05 01:01:28.241967 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-05 01:01:28.241970 | orchestrator | 
2026-01-05 01:01:28.241978 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-01-05 01:01:28.241984 | orchestrator | Monday 05 January 2026 00:49:46 +0000 (0:00:00.954) 0:00:22.969 ********
2026-01-05 01:01:28.241995 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.241999 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.242002 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.242006 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:28.242010 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:28.242044 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:28.242048 | orchestrator | 
2026-01-05 01:01:28.242052 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-01-05 01:01:28.242056 | orchestrator | Monday 05 January 2026 00:49:48 +0000 (0:00:02.028) 0:00:24.998 ********
2026-01-05 01:01:28.242062 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.242068 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.242075 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.242081 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:28.242088 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:28.242094 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:28.242100 | orchestrator | 
2026-01-05 01:01:28.242106 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-05 01:01:28.242113 | orchestrator | Monday 05 January 2026 00:49:51 +0000 (0:00:02.197) 0:00:27.196 ********
2026-01-05 01:01:28.242119 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.242125 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.242132 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.242138 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:28.242144 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:28.242151 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:28.242157 | orchestrator | 
2026-01-05 01:01:28.242229 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-01-05 01:01:28.242237 | orchestrator | Monday 05 January 2026 00:49:52 +0000 (0:00:01.428) 0:00:28.624 ********
2026-01-05 01:01:28.242243 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.242250 | orchestrator | 
2026-01-05 01:01:28.242257 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-01-05 01:01:28.242263 | orchestrator | Monday 05 January 2026 00:49:52 +0000 (0:00:00.128) 0:00:28.753 ********
2026-01-05 01:01:28.242269 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.242276 | orchestrator | 
2026-01-05 01:01:28.242292 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-05 01:01:28.242299 | orchestrator | Monday 05 January 2026 00:49:52 +0000 (0:00:00.250) 0:00:29.003 ********
2026-01-05 01:01:28.242518 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.242532 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.242537 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.242557 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:28.242562 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:28.242565 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:28.242569 | orchestrator | 
2026-01-05 01:01:28.242573 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-01-05 01:01:28.242577 | orchestrator | Monday 05 January 2026 00:49:54 +0000 (0:00:01.191) 0:00:30.194 ********
2026-01-05 01:01:28.242581 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.242585 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.242590 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.242596 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:28.242601 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:28.242605 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:28.242609 | orchestrator | 
2026-01-05 01:01:28.242612 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-01-05 01:01:28.242616 | orchestrator | Monday 05 January 2026 00:49:55 +0000 (0:00:01.648) 0:00:31.843 ********
2026-01-05 01:01:28.242620 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.242624 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.242627 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.242640 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:28.242644 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:28.242648 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:28.242652 | orchestrator | 
2026-01-05 01:01:28.242656 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-01-05 01:01:28.242660 | orchestrator | Monday 05 January 2026 00:49:56 +0000 (0:00:01.037) 0:00:32.881 ********
2026-01-05 01:01:28.242663 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.242668 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.242674 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.242679 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:28.242685 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:28.242691 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:28.242698 | orchestrator | 
2026-01-05 01:01:28.242704 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-01-05 01:01:28.242710 | orchestrator | Monday 05 January 2026 00:49:57 +0000 (0:00:00.840) 0:00:33.721 ********
2026-01-05 01:01:28.242716 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.242723 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.242729 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.242736 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:28.242742 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:28.242749 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:28.242755 | orchestrator | 
2026-01-05 01:01:28.242761 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-01-05 01:01:28.242768 | orchestrator | Monday 05 January 2026 00:49:58 +0000 (0:00:00.789) 0:00:34.511 ********
2026-01-05 01:01:28.242774 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.242781 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.242788 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.242889 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:28.242906 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:28.242913 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:28.242946 | orchestrator | 
2026-01-05 01:01:28.242955 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-01-05 01:01:28.242961 | orchestrator | Monday 05 January 2026 00:49:59 +0000 (0:00:01.114) 0:00:35.626 ********
2026-01-05 01:01:28.242969 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.243101 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.243112 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.243116 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:28.243120 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:28.243123 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:28.243127 | orchestrator | 
2026-01-05 01:01:28.243131 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-01-05 01:01:28.243137 | orchestrator | Monday 05 January 2026 00:50:01 +0000 (0:00:01.589) 0:00:37.216 ********
2026-01-05 01:01:28.243145 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5dd43ce6--96bd--500c--b036--3c9652e3f870-osd--block--5dd43ce6--96bd--500c--b036--3c9652e3f870', 'dm-uuid-LVM-MRS6l1IAkKZkcgde5V97M1EMcnMqW3KrWMak6G2cCTR1eTmdrPCzGKQ7dp26Sw0L'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}) 
2026-01-05 01:01:28.243154 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6f45f623--6f4a--59be--980f--23e900ac5d1d-osd--block--6f45f623--6f4a--59be--980f--23e900ac5d1d', 'dm-uuid-LVM-dMSf1iDZpYOiEcelFI9OhV4BqXMF9J3XuaegpFaqFBpSVeWjMCdZGLJXaFwDWJkJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}) 
2026-01-05 01:01:28.243234 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
2026-01-05 01:01:28.243243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
2026-01-05 01:01:28.243247 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode':
'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:01:28.243251 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bd4e3544--7c7e--58ac--a4cc--590b648d75bf-osd--block--bd4e3544--7c7e--58ac--a4cc--590b648d75bf', 'dm-uuid-LVM-Y1ILTfcYxwsemW78hlDn0ywfi8DN4JXxhHxIRulY0sc7u2rAebOgnUYbiPFpUItE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 01:01:28.243256 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--35e03706--0bf5--5720--bc24--6001f60a2be0-osd--block--35e03706--0bf5--5720--bc24--6001f60a2be0', 'dm-uuid-LVM-GYepXQFoGtbQElW2LEnFOoJC2SC8ItgfMcQViTHK0hYiatEG3Gclkza6tpiTXAMc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 01:01:28.243265 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:01:28.243269 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:01:28.243273 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:01:28.243282 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:01:28.243299 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:01:28.243303 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:01:28.243307 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:01:28.243311 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:01:28.243315 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:01:28.243322 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2026-01-05 01:01:28.243341 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f', 'scsi-SQEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f-part1', 'scsi-SQEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f-part14', 'scsi-SQEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f-part15', 'scsi-SQEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f-part16', 'scsi-SQEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:01:28.243352 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:01:28.243356 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f2726894--ebb3--5d48--8b2e--e077f444c4ac-osd--block--f2726894--ebb3--5d48--8b2e--e077f444c4ac', 'dm-uuid-LVM-NJJ3mj0110hGanpgAn0DfkDe3aCEbZl6SsBfXOJX0Fmboc6CeLEDMr6ptd0ICwRT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 01:01:28.243361 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--5dd43ce6--96bd--500c--b036--3c9652e3f870-osd--block--5dd43ce6--96bd--500c--b036--3c9652e3f870'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LElmMj-QxHX-v7CL-WeUG-BWYV-FdPv-dF20Gl', 'scsi-0QEMU_QEMU_HARDDISK_40600621-aef8-490d-8855-2a618a83589e', 'scsi-SQEMU_QEMU_HARDDISK_40600621-aef8-490d-8855-2a618a83589e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:01:28.243368 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--edc09b40--6ec9--59c0--95b4--baacc31b5a92-osd--block--edc09b40--6ec9--59c0--95b4--baacc31b5a92', 'dm-uuid-LVM-Uy1gt3vDGof4bxOmSu3qFRdyPeKP9BsyAft6rhxnraj1pJ9uZtmBjigQE0gTXBC3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 01:01:28.243373 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:01:28.243380 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6f45f623--6f4a--59be--980f--23e900ac5d1d-osd--block--6f45f623--6f4a--59be--980f--23e900ac5d1d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xGBT5x-8Tbz-PsiS-It5s-MMN8-JZB0-adaZAB', 'scsi-0QEMU_QEMU_HARDDISK_423e4112-2158-480f-994d-106730fe425c', 'scsi-SQEMU_QEMU_HARDDISK_423e4112-2158-480f-994d-106730fe425c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:01:28.243395 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:01:28.243399 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:01:28.243403 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:01:28.243407 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_177f10be-5bcc-4fc5-a906-9c9dfc4c0725', 'scsi-SQEMU_QEMU_HARDDISK_177f10be-5bcc-4fc5-a906-9c9dfc4c0725'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:01:28.243416 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-02-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:01:28.243420 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:01:28.243447 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64', 'scsi-SQEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64-part1', 'scsi-SQEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64-part14', 'scsi-SQEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64-part15', 'scsi-SQEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64-part16', 'scsi-SQEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:01:28.243452 | 
orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.243456 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:01:28.243460 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--bd4e3544--7c7e--58ac--a4cc--590b648d75bf-osd--block--bd4e3544--7c7e--58ac--a4cc--590b648d75bf'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZdmmZx-gddZ-3NQk-p78B-1iPq-ZrZ7-RfMK3x', 'scsi-0QEMU_QEMU_HARDDISK_faa0d012-340f-4cbd-a064-876345a11d6a', 'scsi-SQEMU_QEMU_HARDDISK_faa0d012-340f-4cbd-a064-876345a11d6a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:01:28.243467 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:01:28.243472 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--35e03706--0bf5--5720--bc24--6001f60a2be0-osd--block--35e03706--0bf5--5720--bc24--6001f60a2be0'], 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-c3mc6Y-izxE-ZkGV-iJVS-rMd1-Ah2v-MsRqAm', 'scsi-0QEMU_QEMU_HARDDISK_79f451b0-665e-4ae6-bc28-e4c9d18e1f8d', 'scsi-SQEMU_QEMU_HARDDISK_79f451b0-665e-4ae6-bc28-e4c9d18e1f8d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:01:28.243483 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:01:28.243487 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:01:28.243502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_165d58d7-2860-4843-bbd3-8318e20b6051', 'scsi-SQEMU_QEMU_HARDDISK_165d58d7-2860-4843-bbd3-8318e20b6051'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:01:28.243506 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:01:28.243510 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:01:28.243518 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305', 'scsi-SQEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305-part1', 'scsi-SQEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305-part14', 'scsi-SQEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305-part15', 'scsi-SQEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305-part16', 'scsi-SQEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:01:28.243537 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f2726894--ebb3--5d48--8b2e--e077f444c4ac-osd--block--f2726894--ebb3--5d48--8b2e--e077f444c4ac'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2RR5of-j2i6-Eldl-JMfj-d8cv-dWlx-QICqMn', 'scsi-0QEMU_QEMU_HARDDISK_23055056-069f-450b-aeeb-5eb50c3216da', 'scsi-SQEMU_QEMU_HARDDISK_23055056-069f-450b-aeeb-5eb50c3216da'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:01:28.243541 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--edc09b40--6ec9--59c0--95b4--baacc31b5a92-osd--block--edc09b40--6ec9--59c0--95b4--baacc31b5a92'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-nvzYZd-l3rJ-Ej6t-6vq8-YsXl-wCLG-UHGvYS', 'scsi-0QEMU_QEMU_HARDDISK_bd2b6514-9bcf-45c0-8865-be606d512acf', 'scsi-SQEMU_QEMU_HARDDISK_bd2b6514-9bcf-45c0-8865-be606d512acf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:01:28.243545 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a447ecf7-81d3-4a74-8944-683d4141cf1b', 'scsi-SQEMU_QEMU_HARDDISK_a447ecf7-81d3-4a74-8944-683d4141cf1b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:01:28.243552 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-02-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:01:28.243560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:01:28.243567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
2026-01-05 01:01:28.243573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 01:01:28.243580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 01:01:28.243603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 01:01:28.243611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 01:01:28.243617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 01:01:28.243624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 01:01:28.243635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34fdbb66-233c-4628-9399-a3b3dd90abc2', 'scsi-SQEMU_QEMU_HARDDISK_34fdbb66-233c-4628-9399-a3b3dd90abc2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34fdbb66-233c-4628-9399-a3b3dd90abc2-part1', 'scsi-SQEMU_QEMU_HARDDISK_34fdbb66-233c-4628-9399-a3b3dd90abc2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34fdbb66-233c-4628-9399-a3b3dd90abc2-part14', 'scsi-SQEMU_QEMU_HARDDISK_34fdbb66-233c-4628-9399-a3b3dd90abc2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34fdbb66-233c-4628-9399-a3b3dd90abc2-part15', 'scsi-SQEMU_QEMU_HARDDISK_34fdbb66-233c-4628-9399-a3b3dd90abc2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34fdbb66-233c-4628-9399-a3b3dd90abc2-part16', 'scsi-SQEMU_QEMU_HARDDISK_34fdbb66-233c-4628-9399-a3b3dd90abc2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-05 01:01:28.243662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-02-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-05 01:01:28.243670 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.243676 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.243683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 01:01:28.243690 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:28.243697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 01:01:28.243704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 01:01:28.243711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 01:01:28.244018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 01:01:28.244043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 01:01:28.244048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 01:01:28.244052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 01:01:28.244077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 01:01:28.244082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 01:01:28.244087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 01:01:28.244093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b0a8530-6c77-4769-a703-fe762948c9fb', 'scsi-SQEMU_QEMU_HARDDISK_5b0a8530-6c77-4769-a703-fe762948c9fb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b0a8530-6c77-4769-a703-fe762948c9fb-part1', 'scsi-SQEMU_QEMU_HARDDISK_5b0a8530-6c77-4769-a703-fe762948c9fb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b0a8530-6c77-4769-a703-fe762948c9fb-part14', 'scsi-SQEMU_QEMU_HARDDISK_5b0a8530-6c77-4769-a703-fe762948c9fb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b0a8530-6c77-4769-a703-fe762948c9fb-part15', 'scsi-SQEMU_QEMU_HARDDISK_5b0a8530-6c77-4769-a703-fe762948c9fb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b0a8530-6c77-4769-a703-fe762948c9fb-part16', 'scsi-SQEMU_QEMU_HARDDISK_5b0a8530-6c77-4769-a703-fe762948c9fb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-05 01:01:28.244105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 01:01:28.244111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-05 01:01:28.244128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 01:01:28.244132 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:28.244137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 01:01:28.244141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 01:01:28.244145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 01:01:28.244181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9af08ba0-0250-48f3-ad13-298a6ecbf4d6', 'scsi-SQEMU_QEMU_HARDDISK_9af08ba0-0250-48f3-ad13-298a6ecbf4d6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9af08ba0-0250-48f3-ad13-298a6ecbf4d6-part1', 'scsi-SQEMU_QEMU_HARDDISK_9af08ba0-0250-48f3-ad13-298a6ecbf4d6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9af08ba0-0250-48f3-ad13-298a6ecbf4d6-part14', 'scsi-SQEMU_QEMU_HARDDISK_9af08ba0-0250-48f3-ad13-298a6ecbf4d6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9af08ba0-0250-48f3-ad13-298a6ecbf4d6-part15', 'scsi-SQEMU_QEMU_HARDDISK_9af08ba0-0250-48f3-ad13-298a6ecbf4d6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9af08ba0-0250-48f3-ad13-298a6ecbf4d6-part16', 'scsi-SQEMU_QEMU_HARDDISK_9af08ba0-0250-48f3-ad13-298a6ecbf4d6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard':
'4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-05 01:01:28.244198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-05 01:01:28.244203 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:28.244207 | orchestrator |
2026-01-05 01:01:28.244211 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-01-05 01:01:28.244215 | orchestrator | Monday 05 January 2026 00:50:03 +0000 (0:00:02.567) 0:00:39.783 ********
2026-01-05 01:01:28.244220 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5dd43ce6--96bd--500c--b036--3c9652e3f870-osd--block--5dd43ce6--96bd--500c--b036--3c9652e3f870', 'dm-uuid-LVM-MRS6l1IAkKZkcgde5V97M1EMcnMqW3KrWMak6G2cCTR1eTmdrPCzGKQ7dp26Sw0L'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.244241 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6f45f623--6f4a--59be--980f--23e900ac5d1d-osd--block--6f45f623--6f4a--59be--980f--23e900ac5d1d', 'dm-uuid-LVM-dMSf1iDZpYOiEcelFI9OhV4BqXMF9J3XuaegpFaqFBpSVeWjMCdZGLJXaFwDWJkJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.244249 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.244254 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.244258 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.244274 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.244279 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.244283 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bd4e3544--7c7e--58ac--a4cc--590b648d75bf-osd--block--bd4e3544--7c7e--58ac--a4cc--590b648d75bf', 'dm-uuid-LVM-Y1ILTfcYxwsemW78hlDn0ywfi8DN4JXxhHxIRulY0sc7u2rAebOgnUYbiPFpUItE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.244292 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.244297 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--35e03706--0bf5--5720--bc24--6001f60a2be0-osd--block--35e03706--0bf5--5720--bc24--6001f60a2be0', 'dm-uuid-LVM-GYepXQFoGtbQElW2LEnFOoJC2SC8ItgfMcQViTHK0hYiatEG3Gclkza6tpiTXAMc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.244301 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.244547 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.244558 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode':
'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.244735 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f', 'scsi-SQEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f-part1', 'scsi-SQEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f-part14', 'scsi-SQEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f-part14'], 'labels': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f-part15', 'scsi-SQEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f-part16', 'scsi-SQEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.244857 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--5dd43ce6--96bd--500c--b036--3c9652e3f870-osd--block--5dd43ce6--96bd--500c--b036--3c9652e3f870'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LElmMj-QxHX-v7CL-WeUG-BWYV-FdPv-dF20Gl', 'scsi-0QEMU_QEMU_HARDDISK_40600621-aef8-490d-8855-2a618a83589e', 'scsi-SQEMU_QEMU_HARDDISK_40600621-aef8-490d-8855-2a618a83589e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.244868 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.244873 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--6f45f623--6f4a--59be--980f--23e900ac5d1d-osd--block--6f45f623--6f4a--59be--980f--23e900ac5d1d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xGBT5x-8Tbz-PsiS-It5s-MMN8-JZB0-adaZAB', 'scsi-0QEMU_QEMU_HARDDISK_423e4112-2158-480f-994d-106730fe425c', 'scsi-SQEMU_QEMU_HARDDISK_423e4112-2158-480f-994d-106730fe425c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.244887 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.244895 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_177f10be-5bcc-4fc5-a906-9c9dfc4c0725', 'scsi-SQEMU_QEMU_HARDDISK_177f10be-5bcc-4fc5-a906-9c9dfc4c0725'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.244900 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-02-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.244919 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.244924 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.244933 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.244941 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f2726894--ebb3--5d48--8b2e--e077f444c4ac-osd--block--f2726894--ebb3--5d48--8b2e--e077f444c4ac', 'dm-uuid-LVM-NJJ3mj0110hGanpgAn0DfkDe3aCEbZl6SsBfXOJX0Fmboc6CeLEDMr6ptd0ICwRT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '',
'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:01:28.244945 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:01:28.244949 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--edc09b40--6ec9--59c0--95b4--baacc31b5a92-osd--block--edc09b40--6ec9--59c0--95b4--baacc31b5a92', 'dm-uuid-LVM-Uy1gt3vDGof4bxOmSu3qFRdyPeKP9BsyAft6rhxnraj1pJ9uZtmBjigQE0gTXBC3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:01:28.244963 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:01:28.244968 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:01:28.244975 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:01:28.244979 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.244987 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64', 'scsi-SQEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64-part1', 'scsi-SQEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64-part14', 'scsi-SQEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64-part15', 'scsi-SQEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64-part16', 'scsi-SQEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-05 01:01:28.245001 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:01:28.245009 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--bd4e3544--7c7e--58ac--a4cc--590b648d75bf-osd--block--bd4e3544--7c7e--58ac--a4cc--590b648d75bf'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZdmmZx-gddZ-3NQk-p78B-1iPq-ZrZ7-RfMK3x', 'scsi-0QEMU_QEMU_HARDDISK_faa0d012-340f-4cbd-a064-876345a11d6a', 'scsi-SQEMU_QEMU_HARDDISK_faa0d012-340f-4cbd-a064-876345a11d6a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:01:28.245013 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:01:28.245020 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--35e03706--0bf5--5720--bc24--6001f60a2be0-osd--block--35e03706--0bf5--5720--bc24--6001f60a2be0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-c3mc6Y-izxE-ZkGV-iJVS-rMd1-Ah2v-MsRqAm', 'scsi-0QEMU_QEMU_HARDDISK_79f451b0-665e-4ae6-bc28-e4c9d18e1f8d', 'scsi-SQEMU_QEMU_HARDDISK_79f451b0-665e-4ae6-bc28-e4c9d18e1f8d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:01:28.245032 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:01:28.245037 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:01:28.245051 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:01:28.245060 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:01:28.245064 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:01:28.245073 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:01:28.245077 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:01:28.245081 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:01:28.245146 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:01:28.245282 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:01:28.245290 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:01:28.245455 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI 
storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305', 'scsi-SQEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305-part1', 'scsi-SQEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305-part14', 'scsi-SQEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305-part15', 'scsi-SQEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305-part16', 'scsi-SQEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-01-05 01:01:28.245513 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_165d58d7-2860-4843-bbd3-8318e20b6051', 'scsi-SQEMU_QEMU_HARDDISK_165d58d7-2860-4843-bbd3-8318e20b6051'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:01:28.245525 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:01:28.245530 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f2726894--ebb3--5d48--8b2e--e077f444c4ac-osd--block--f2726894--ebb3--5d48--8b2e--e077f444c4ac'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2RR5of-j2i6-Eldl-JMfj-d8cv-dWlx-QICqMn', 'scsi-0QEMU_QEMU_HARDDISK_23055056-069f-450b-aeeb-5eb50c3216da', 'scsi-SQEMU_QEMU_HARDDISK_23055056-069f-450b-aeeb-5eb50c3216da'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:01:28.245549 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34fdbb66-233c-4628-9399-a3b3dd90abc2', 'scsi-SQEMU_QEMU_HARDDISK_34fdbb66-233c-4628-9399-a3b3dd90abc2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34fdbb66-233c-4628-9399-a3b3dd90abc2-part1', 'scsi-SQEMU_QEMU_HARDDISK_34fdbb66-233c-4628-9399-a3b3dd90abc2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34fdbb66-233c-4628-9399-a3b3dd90abc2-part14', 'scsi-SQEMU_QEMU_HARDDISK_34fdbb66-233c-4628-9399-a3b3dd90abc2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_34fdbb66-233c-4628-9399-a3b3dd90abc2-part15', 'scsi-SQEMU_QEMU_HARDDISK_34fdbb66-233c-4628-9399-a3b3dd90abc2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34fdbb66-233c-4628-9399-a3b3dd90abc2-part16', 'scsi-SQEMU_QEMU_HARDDISK_34fdbb66-233c-4628-9399-a3b3dd90abc2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:01:28.245558 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:01:28.245564 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, 
[])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-02-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.245572 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--edc09b40--6ec9--59c0--95b4--baacc31b5a92-osd--block--edc09b40--6ec9--59c0--95b4--baacc31b5a92'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-nvzYZd-l3rJ-Ej6t-6vq8-YsXl-wCLG-UHGvYS', 'scsi-0QEMU_QEMU_HARDDISK_bd2b6514-9bcf-45c0-8865-be606d512acf', 'scsi-SQEMU_QEMU_HARDDISK_bd2b6514-9bcf-45c0-8865-be606d512acf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.245576 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a447ecf7-81d3-4a74-8944-683d4141cf1b', 'scsi-SQEMU_QEMU_HARDDISK_a447ecf7-81d3-4a74-8944-683d4141cf1b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.245590 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-02-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.245597 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.245602 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.245606 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.245613 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.245617 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.245621 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.245628 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.245644 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.245649 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:28.245652 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.245656 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.245664 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b0a8530-6c77-4769-a703-fe762948c9fb', 'scsi-SQEMU_QEMU_HARDDISK_5b0a8530-6c77-4769-a703-fe762948c9fb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b0a8530-6c77-4769-a703-fe762948c9fb-part1', 'scsi-SQEMU_QEMU_HARDDISK_5b0a8530-6c77-4769-a703-fe762948c9fb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b0a8530-6c77-4769-a703-fe762948c9fb-part14', 'scsi-SQEMU_QEMU_HARDDISK_5b0a8530-6c77-4769-a703-fe762948c9fb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b0a8530-6c77-4769-a703-fe762948c9fb-part15', 'scsi-SQEMU_QEMU_HARDDISK_5b0a8530-6c77-4769-a703-fe762948c9fb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b0a8530-6c77-4769-a703-fe762948c9fb-part16', 'scsi-SQEMU_QEMU_HARDDISK_5b0a8530-6c77-4769-a703-fe762948c9fb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.245683 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.245688 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:28.245692 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.245696 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.245700 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.245707 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.245711 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.245719 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.245733 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.245765 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.245778 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9af08ba0-0250-48f3-ad13-298a6ecbf4d6', 'scsi-SQEMU_QEMU_HARDDISK_9af08ba0-0250-48f3-ad13-298a6ecbf4d6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9af08ba0-0250-48f3-ad13-298a6ecbf4d6-part1', 'scsi-SQEMU_QEMU_HARDDISK_9af08ba0-0250-48f3-ad13-298a6ecbf4d6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9af08ba0-0250-48f3-ad13-298a6ecbf4d6-part14', 'scsi-SQEMU_QEMU_HARDDISK_9af08ba0-0250-48f3-ad13-298a6ecbf4d6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9af08ba0-0250-48f3-ad13-298a6ecbf4d6-part15', 'scsi-SQEMU_QEMU_HARDDISK_9af08ba0-0250-48f3-ad13-298a6ecbf4d6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9af08ba0-0250-48f3-ad13-298a6ecbf4d6-part16', 'scsi-SQEMU_QEMU_HARDDISK_9af08ba0-0250-48f3-ad13-298a6ecbf4d6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.245790 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:01:28.245813 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:28.245817 | orchestrator |
2026-01-05 01:01:28.245835 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-01-05 01:01:28.245840 | orchestrator | Monday 05 January 2026 00:50:07 +0000 (0:00:03.804) 0:00:43.587 ********
2026-01-05 01:01:28.245845 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:01:28.245883 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:01:28.245888 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:01:28.245892 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:01:28.245896 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:01:28.245900 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:01:28.245941 | orchestrator |
2026-01-05 01:01:28.245945 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-01-05 01:01:28.245949 | orchestrator | Monday 05 January 2026 00:50:10 +0000 (0:00:02.629) 0:00:46.216 ********
2026-01-05 01:01:28.245999 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:01:28.246005 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:01:28.246009 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:01:28.246200 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:01:28.246213 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:01:28.246219 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:01:28.246225 | orchestrator |
2026-01-05 01:01:28.246232 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-05 01:01:28.246269 | orchestrator | Monday 05 January 2026 00:50:11 +0000 (0:00:00.981) 0:00:47.198 ********
2026-01-05 01:01:28.246275 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.246416 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.246742 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.246753 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:28.246757 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:28.246762 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:28.246766 | orchestrator |
2026-01-05 01:01:28.246771 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-05 01:01:28.246775 | orchestrator | Monday 05 January 2026 00:50:13 +0000 (0:00:02.036) 0:00:49.235 ********
2026-01-05 01:01:28.246780 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.246784 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.246810 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.246818 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:28.246824 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:28.246830 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:28.246835 | orchestrator |
2026-01-05 01:01:28.246841 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-05 01:01:28.246847 | orchestrator | Monday 05 January 2026 00:50:14 +0000 (0:00:01.112) 0:00:50.347 ********
2026-01-05 01:01:28.246854 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.246859 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.246865 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.246881 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:28.246888 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:28.246894 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:28.246900 | orchestrator |
2026-01-05 01:01:28.246906 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-05 01:01:28.246912 | orchestrator | Monday 05 January 2026 00:50:16 +0000 (0:00:02.520) 0:00:52.868 ********
2026-01-05 01:01:28.246918 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.246924 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.246930 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.246936 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:28.246943 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:28.246949 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:28.246955 | orchestrator |
2026-01-05 01:01:28.246967 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-01-05 01:01:28.246975 | orchestrator | Monday 05 January 2026 00:50:17 +0000 (0:00:01.135) 0:00:54.003 ********
2026-01-05 01:01:28.246980 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-01-05 01:01:28.246984 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-01-05 01:01:28.246988 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-01-05 01:01:28.246992 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-01-05 01:01:28.246996 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-05 01:01:28.247000 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-01-05 01:01:28.247003 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-01-05 01:01:28.247007 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-01-05 01:01:28.247011 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-01-05 01:01:28.247015 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-01-05 01:01:28.247018 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-01-05 01:01:28.247022 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-01-05 01:01:28.247026 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-01-05 01:01:28.247030 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-01-05 01:01:28.247033 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-01-05 01:01:28.247037 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-01-05 01:01:28.247041 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-01-05 01:01:28.247045 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-01-05 01:01:28.247049 | orchestrator |
2026-01-05 01:01:28.247052 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-01-05 01:01:28.247056 | orchestrator | Monday 05 January 2026 00:50:22 +0000 (0:00:04.251) 0:00:58.255 ********
2026-01-05 01:01:28.247060 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-05 01:01:28.247064 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-05 01:01:28.247068 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-05 01:01:28.247071 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.247075 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-05 01:01:28.247079 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-05 01:01:28.247083 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-05 01:01:28.247087 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.247090 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-05 01:01:28.247130 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-05 01:01:28.247139 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-05 01:01:28.247146 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-05 01:01:28.247153 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-05 01:01:28.247159 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-05 01:01:28.247173 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.247179 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-05 01:01:28.247186 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-05 01:01:28.247192 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-05 01:01:28.247199 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:28.247205 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:28.247211 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-05 01:01:28.247217 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-05 01:01:28.247223 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-05 01:01:28.247229 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:28.247234 | orchestrator |
2026-01-05 01:01:28.247240 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-01-05 01:01:28.247247 | orchestrator | Monday 05 January 2026 00:50:23 +0000 (0:00:01.164) 0:00:59.771 ********
2026-01-05 01:01:28.247253 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:28.247258 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:28.247263 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:28.247270 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 01:01:28.247276 | orchestrator |
2026-01-05 01:01:28.247282 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-05 01:01:28.247290 | orchestrator | Monday 05 January 2026 00:50:24 +0000 (0:00:01.164) 0:01:00.935 ********
2026-01-05 01:01:28.247295 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.247301 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.247307 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.247312 | orchestrator |
2026-01-05 01:01:28.247318 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-05 01:01:28.247325 | orchestrator | Monday 05 January 2026 00:50:25 +0000 (0:00:00.461) 0:01:01.396 ********
2026-01-05 01:01:28.247330 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.247336 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.247342 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.247347 | orchestrator |
2026-01-05 01:01:28.247353 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-05 01:01:28.247359 | orchestrator | Monday 05 January 2026 00:50:25 +0000 (0:00:00.387) 0:01:01.784 ********
2026-01-05 01:01:28.247365 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.247371 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.247377 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.247384 | orchestrator |
2026-01-05 01:01:28.247390 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-05 01:01:28.247402 | orchestrator | Monday 05 January 2026 00:50:26 +0000 (0:00:00.504) 0:01:02.288 ********
2026-01-05 01:01:28.247409 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:01:28.247416 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:01:28.247422 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:01:28.247428 | orchestrator |
2026-01-05 01:01:28.247434 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-05 01:01:28.247441 | orchestrator | Monday 05 January 2026 00:50:26 +0000 (0:00:00.451) 0:01:02.740 ********
2026-01-05 01:01:28.247447 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 01:01:28.247453 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-05 01:01:28.247459 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-05 01:01:28.247466 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.247472 | orchestrator |
2026-01-05 01:01:28.247479 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-05 01:01:28.247485 | orchestrator | Monday 05 January 2026 00:50:27 +0000 (0:00:00.382) 0:01:03.123 ********
2026-01-05 01:01:28.247498 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 01:01:28.247504 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-05 01:01:28.247510 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-05 01:01:28.247517 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.247523 | orchestrator |
2026-01-05 01:01:28.247528 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-05 01:01:28.247535 | orchestrator | Monday 05 January 2026 00:50:27 +0000 (0:00:00.380) 0:01:03.504 ********
2026-01-05 01:01:28.247541 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 01:01:28.247547 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-05 01:01:28.247553 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-05 01:01:28.247559 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.247565 | orchestrator |
2026-01-05 01:01:28.247571 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-05 01:01:28.247577 | orchestrator | Monday 05 January 2026 00:50:27 +0000 (0:00:00.388) 0:01:03.892 ********
2026-01-05 01:01:28.247583 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:01:28.247589 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:01:28.247595 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:01:28.247601 | orchestrator |
2026-01-05 01:01:28.247608 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-05 01:01:28.247615 | orchestrator | Monday 05 January 2026 00:50:28 +0000 (0:00:00.611) 0:01:04.504 ********
2026-01-05 01:01:28.247621 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-05 01:01:28.247627 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-01-05 01:01:28.247673 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-01-05 01:01:28.247680 | orchestrator |
2026-01-05 01:01:28.247684 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-01-05 01:01:28.247688 | orchestrator | Monday 05 January 2026 00:50:29 +0000 (0:00:00.972) 0:01:05.477 ********
2026-01-05 01:01:28.247692 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-05 01:01:28.247697 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-05 01:01:28.247702 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-05 01:01:28.247706 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 01:01:28.247710 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-05 01:01:28.247714 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-05 01:01:28.247717 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-05 01:01:28.247721 | orchestrator |
2026-01-05 01:01:28.247725 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-01-05 01:01:28.247729 | orchestrator | Monday 05 January 2026 00:50:30 +0000 (0:00:00.859) 0:01:06.337 ********
2026-01-05 01:01:28.247735 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-05 01:01:28.247740 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-05 01:01:28.247746 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-05 01:01:28.247753 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 01:01:28.247759 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-05 01:01:28.247765 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-05 01:01:28.247771 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-05 01:01:28.247776 | orchestrator |
2026-01-05 01:01:28.247783 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-05 01:01:28.247818 | orchestrator | Monday 05 January 2026 00:50:32 +0000 (0:00:02.019) 0:01:08.356 ********
2026-01-05 01:01:28.247826 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:01:28.247834 | orchestrator |
2026-01-05 01:01:28.247840 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-05 01:01:28.247847 | orchestrator | Monday 05 January 2026 00:50:33 +0000 (0:00:01.584) 0:01:09.940 ********
2026-01-05 01:01:28.247853 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:01:28.247859 | orchestrator |
2026-01-05 01:01:28.247873 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-05 01:01:28.247880 | orchestrator | Monday 05 January 2026 00:50:35 +0000 (0:00:01.522) 0:01:11.463 ********
2026-01-05 01:01:28.247886 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.247893 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.247899 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.247905 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:01:28.247911 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:01:28.247917 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:01:28.247923 | orchestrator |
2026-01-05 01:01:28.247929 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-05 01:01:28.247935 | orchestrator | Monday 05 January 2026 00:50:37 +0000 (0:00:02.323) 0:01:13.787 ********
2026-01-05 01:01:28.247940 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:01:28.247948 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:01:28.247954 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:28.247960 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:01:28.247967 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:28.247972 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:28.247978 | orchestrator |
2026-01-05 01:01:28.247985 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-05 01:01:28.247991 | orchestrator | Monday 05 January 2026 00:50:40 +0000 (0:00:02.769) 0:01:16.556 ********
2026-01-05 01:01:28.247997 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:01:28.248003 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:28.248009 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:01:28.248015 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:01:28.248022 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:28.248028 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:28.248034 | orchestrator |
2026-01-05 01:01:28.248041 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-05 01:01:28.248047 | orchestrator | Monday 05 January 2026 00:50:42 +0000 (0:00:01.972) 0:01:18.529 ********
2026-01-05 01:01:28.248053 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:01:28.248059 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:01:28.248065 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:28.248071 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:01:28.248076 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:28.248082 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:28.248089 | orchestrator |
2026-01-05 01:01:28.248095 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-05 01:01:28.248102 | orchestrator | Monday 05 January 2026 00:50:44 +0000 (0:00:02.054) 0:01:20.584 ********
2026-01-05 01:01:28.248109 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.248116 | orchestrator | skipping:
[testbed-node-4] 2026-01-05 01:01:28.248123 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.248130 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.248137 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:28.248187 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:28.248195 | orchestrator | 2026-01-05 01:01:28.248201 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-05 01:01:28.248215 | orchestrator | Monday 05 January 2026 00:50:47 +0000 (0:00:02.663) 0:01:23.247 ******** 2026-01-05 01:01:28.248222 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.248229 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.248236 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.248242 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.248248 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.248255 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.248261 | orchestrator | 2026-01-05 01:01:28.248268 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-05 01:01:28.248274 | orchestrator | Monday 05 January 2026 00:50:48 +0000 (0:00:00.897) 0:01:24.144 ******** 2026-01-05 01:01:28.248281 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.248287 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.248294 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.248300 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.248307 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.248313 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.248319 | orchestrator | 2026-01-05 01:01:28.248325 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-05 01:01:28.248330 | orchestrator | Monday 05 January 2026 00:50:49 +0000 (0:00:01.074) 0:01:25.219 ******** 
2026-01-05 01:01:28.248336 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.248342 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.248348 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.248354 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.248359 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:28.248369 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:28.248377 | orchestrator | 2026-01-05 01:01:28.248382 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-05 01:01:28.248388 | orchestrator | Monday 05 January 2026 00:50:50 +0000 (0:00:01.384) 0:01:26.604 ******** 2026-01-05 01:01:28.248395 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.248401 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.248406 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.248412 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.248418 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:28.248424 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:28.248430 | orchestrator | 2026-01-05 01:01:28.248436 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-05 01:01:28.248442 | orchestrator | Monday 05 January 2026 00:50:52 +0000 (0:00:01.814) 0:01:28.419 ******** 2026-01-05 01:01:28.248448 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.248454 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.248461 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.248467 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.248473 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.248479 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.248485 | orchestrator | 2026-01-05 01:01:28.248492 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-05 01:01:28.248498 | orchestrator | Monday 
05 January 2026 00:50:53 +0000 (0:00:01.023) 0:01:29.443 ******** 2026-01-05 01:01:28.248504 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.248510 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.248516 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.248522 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.248528 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:28.248539 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:28.248546 | orchestrator | 2026-01-05 01:01:28.248553 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-05 01:01:28.248559 | orchestrator | Monday 05 January 2026 00:50:54 +0000 (0:00:01.460) 0:01:30.904 ******** 2026-01-05 01:01:28.248565 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.248572 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.248581 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.248585 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.248588 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.248592 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.248596 | orchestrator | 2026-01-05 01:01:28.248600 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-05 01:01:28.248603 | orchestrator | Monday 05 January 2026 00:50:55 +0000 (0:00:00.986) 0:01:31.890 ******** 2026-01-05 01:01:28.248607 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.248611 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.248615 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.248618 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.248622 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.248626 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.248630 | orchestrator | 2026-01-05 01:01:28.248634 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] 
****************************** 2026-01-05 01:01:28.248638 | orchestrator | Monday 05 January 2026 00:50:57 +0000 (0:00:01.602) 0:01:33.493 ******** 2026-01-05 01:01:28.248643 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.248649 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.248655 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.248661 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.248667 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.248673 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.248679 | orchestrator | 2026-01-05 01:01:28.248685 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-05 01:01:28.248691 | orchestrator | Monday 05 January 2026 00:50:58 +0000 (0:00:01.254) 0:01:34.748 ******** 2026-01-05 01:01:28.248697 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.248703 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.248710 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.248716 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.248722 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.248729 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.248735 | orchestrator | 2026-01-05 01:01:28.248741 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-05 01:01:28.248747 | orchestrator | Monday 05 January 2026 00:50:59 +0000 (0:00:00.924) 0:01:35.672 ******** 2026-01-05 01:01:28.248753 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.248759 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.248765 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.248771 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.248837 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.248846 | orchestrator | skipping: [testbed-node-2] 2026-01-05 
01:01:28.248852 | orchestrator | 2026-01-05 01:01:28.248859 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-05 01:01:28.248866 | orchestrator | Monday 05 January 2026 00:51:00 +0000 (0:00:00.677) 0:01:36.350 ******** 2026-01-05 01:01:28.248872 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.248879 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.248884 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.248887 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.248891 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:28.248895 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:28.248898 | orchestrator | 2026-01-05 01:01:28.248902 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-05 01:01:28.248906 | orchestrator | Monday 05 January 2026 00:51:01 +0000 (0:00:00.942) 0:01:37.293 ******** 2026-01-05 01:01:28.248910 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.248913 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.248917 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.248921 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.248925 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:28.248928 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:28.248937 | orchestrator | 2026-01-05 01:01:28.248941 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-05 01:01:28.248945 | orchestrator | Monday 05 January 2026 00:51:02 +0000 (0:00:00.749) 0:01:38.043 ******** 2026-01-05 01:01:28.248949 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.248952 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.248956 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.248960 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.248963 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:28.248967 | 
orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:28.248971 | orchestrator | 2026-01-05 01:01:28.248974 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-01-05 01:01:28.248978 | orchestrator | Monday 05 January 2026 00:51:03 +0000 (0:00:01.865) 0:01:39.908 ******** 2026-01-05 01:01:28.248982 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:01:28.248986 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:01:28.248990 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:01:28.248993 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:01:28.248997 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:01:28.249001 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:01:28.249004 | orchestrator | 2026-01-05 01:01:28.249008 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-01-05 01:01:28.249012 | orchestrator | Monday 05 January 2026 00:51:05 +0000 (0:00:01.996) 0:01:41.905 ******** 2026-01-05 01:01:28.249016 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:01:28.249019 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:01:28.249023 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:01:28.249027 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:01:28.249031 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:01:28.249034 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:01:28.249038 | orchestrator | 2026-01-05 01:01:28.249042 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-01-05 01:01:28.249046 | orchestrator | Monday 05 January 2026 00:51:08 +0000 (0:00:03.004) 0:01:44.910 ******** 2026-01-05 01:01:28.249055 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:01:28.249060 | orchestrator | 
2026-01-05 01:01:28.249064 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-01-05 01:01:28.249068 | orchestrator | Monday 05 January 2026 00:51:10 +0000 (0:00:01.500) 0:01:46.410 ******** 2026-01-05 01:01:28.249072 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.249075 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.249079 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.249083 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.249086 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.249090 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.249094 | orchestrator | 2026-01-05 01:01:28.249098 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-01-05 01:01:28.249101 | orchestrator | Monday 05 January 2026 00:51:11 +0000 (0:00:00.844) 0:01:47.255 ******** 2026-01-05 01:01:28.249105 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.249109 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.249113 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.249116 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.249120 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.249124 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.249127 | orchestrator | 2026-01-05 01:01:28.249131 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-01-05 01:01:28.249135 | orchestrator | Monday 05 January 2026 00:51:12 +0000 (0:00:01.228) 0:01:48.483 ******** 2026-01-05 01:01:28.249139 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-05 01:01:28.249142 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-05 01:01:28.249150 | orchestrator | ok: [testbed-node-4] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-05 01:01:28.249153 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-05 01:01:28.249157 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-05 01:01:28.249161 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-05 01:01:28.249165 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-05 01:01:28.249169 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-05 01:01:28.249172 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-05 01:01:28.249176 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-05 01:01:28.249196 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-05 01:01:28.249200 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-05 01:01:28.249204 | orchestrator | 2026-01-05 01:01:28.249208 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-01-05 01:01:28.249211 | orchestrator | Monday 05 January 2026 00:51:14 +0000 (0:00:01.910) 0:01:50.394 ******** 2026-01-05 01:01:28.249215 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:01:28.249219 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:01:28.249223 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:01:28.249227 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:01:28.249230 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:01:28.249234 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:01:28.249238 | orchestrator | 2026-01-05 01:01:28.249242 | orchestrator | TASK [ceph-container-common : Restore certificates 
selinux context] ************ 2026-01-05 01:01:28.249245 | orchestrator | Monday 05 January 2026 00:51:16 +0000 (0:00:01.940) 0:01:52.335 ******** 2026-01-05 01:01:28.249249 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.249253 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.249257 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.249260 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.249264 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.249268 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.249272 | orchestrator | 2026-01-05 01:01:28.249275 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-01-05 01:01:28.249279 | orchestrator | Monday 05 January 2026 00:51:17 +0000 (0:00:00.874) 0:01:53.209 ******** 2026-01-05 01:01:28.249283 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.249287 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.249290 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.249294 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.249298 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.249302 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.249306 | orchestrator | 2026-01-05 01:01:28.249309 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-01-05 01:01:28.249313 | orchestrator | Monday 05 January 2026 00:51:18 +0000 (0:00:01.037) 0:01:54.247 ******** 2026-01-05 01:01:28.249317 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.249321 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.249324 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.249328 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.249332 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.249336 | orchestrator | skipping: [testbed-node-2] 
2026-01-05 01:01:28.249339 | orchestrator | 2026-01-05 01:01:28.249343 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-01-05 01:01:28.249347 | orchestrator | Monday 05 January 2026 00:51:18 +0000 (0:00:00.701) 0:01:54.949 ******** 2026-01-05 01:01:28.249354 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:01:28.249358 | orchestrator | 2026-01-05 01:01:28.249362 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-01-05 01:01:28.249366 | orchestrator | Monday 05 January 2026 00:51:20 +0000 (0:00:01.788) 0:01:56.738 ******** 2026-01-05 01:01:28.249377 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.249380 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.249384 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.249388 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:28.249392 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.249396 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:28.249399 | orchestrator | 2026-01-05 01:01:28.249403 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-01-05 01:01:28.249407 | orchestrator | Monday 05 January 2026 00:52:17 +0000 (0:00:56.391) 0:02:53.130 ******** 2026-01-05 01:01:28.249411 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-05 01:01:28.249415 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-05 01:01:28.249418 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-05 01:01:28.249422 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.249426 | orchestrator | skipping: [testbed-node-4] => 
(item=docker.io/prom/alertmanager:v0.16.2)  2026-01-05 01:01:28.249430 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-05 01:01:28.249434 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-05 01:01:28.249437 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.249441 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-05 01:01:28.249445 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-05 01:01:28.249449 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-05 01:01:28.249453 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.249456 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-05 01:01:28.249460 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-05 01:01:28.249464 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-05 01:01:28.249468 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-05 01:01:28.249472 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-05 01:01:28.249475 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-05 01:01:28.249479 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.249483 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.249499 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-05 01:01:28.249503 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-05 01:01:28.249507 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-05 01:01:28.249511 | orchestrator | skipping: 
[testbed-node-2] 2026-01-05 01:01:28.249514 | orchestrator | 2026-01-05 01:01:28.249518 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-01-05 01:01:28.249522 | orchestrator | Monday 05 January 2026 00:52:18 +0000 (0:00:00.949) 0:02:54.079 ******** 2026-01-05 01:01:28.249526 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.249530 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.249534 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.249537 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.249541 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.249549 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.249552 | orchestrator | 2026-01-05 01:01:28.249556 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-01-05 01:01:28.249560 | orchestrator | Monday 05 January 2026 00:52:19 +0000 (0:00:01.288) 0:02:55.367 ******** 2026-01-05 01:01:28.249564 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.249568 | orchestrator | 2026-01-05 01:01:28.249571 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-01-05 01:01:28.249575 | orchestrator | Monday 05 January 2026 00:52:19 +0000 (0:00:00.206) 0:02:55.574 ******** 2026-01-05 01:01:28.249579 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.249583 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.249586 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.249590 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.249594 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.249598 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.249602 | orchestrator | 2026-01-05 01:01:28.249605 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-01-05 01:01:28.249609 | 
orchestrator | Monday 05 January 2026 00:52:20 +0000 (0:00:00.863) 0:02:56.438 ******** 2026-01-05 01:01:28.249613 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.249617 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.249621 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.249624 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.249628 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.249632 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.249636 | orchestrator | 2026-01-05 01:01:28.249639 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-01-05 01:01:28.249643 | orchestrator | Monday 05 January 2026 00:52:21 +0000 (0:00:01.096) 0:02:57.534 ******** 2026-01-05 01:01:28.249647 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.249651 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.249655 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.249658 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.249662 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.249666 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.249670 | orchestrator | 2026-01-05 01:01:28.249673 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-01-05 01:01:28.249677 | orchestrator | Monday 05 January 2026 00:52:22 +0000 (0:00:00.928) 0:02:58.463 ******** 2026-01-05 01:01:28.249681 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.249685 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.249688 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:28.249696 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.249700 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:28.249703 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.249707 | orchestrator | 2026-01-05 01:01:28.249711 | orchestrator | TASK 
[ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-01-05 01:01:28.249715 | orchestrator | Monday 05 January 2026 00:52:25 +0000 (0:00:03.114) 0:03:01.578 ******** 2026-01-05 01:01:28.249718 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.249722 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.249726 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.249730 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.249734 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:28.249737 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:28.249741 | orchestrator | 2026-01-05 01:01:28.249745 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-01-05 01:01:28.249749 | orchestrator | Monday 05 January 2026 00:52:26 +0000 (0:00:01.234) 0:03:02.813 ******** 2026-01-05 01:01:28.249753 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:01:28.249758 | orchestrator | 2026-01-05 01:01:28.249762 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-01-05 01:01:28.249772 | orchestrator | Monday 05 January 2026 00:52:28 +0000 (0:00:01.296) 0:03:04.110 ******** 2026-01-05 01:01:28.249776 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.249780 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.249784 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.249787 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.249810 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.249814 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.249818 | orchestrator | 2026-01-05 01:01:28.249822 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-01-05 01:01:28.249826 | 
orchestrator | Monday 05 January 2026 00:52:28 +0000 (0:00:00.861) 0:03:04.971 ******** 2026-01-05 01:01:28.249830 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.249834 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.249838 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.249842 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.249846 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.249849 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.249853 | orchestrator | 2026-01-05 01:01:28.249857 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-01-05 01:01:28.249861 | orchestrator | Monday 05 January 2026 00:52:29 +0000 (0:00:00.873) 0:03:05.844 ******** 2026-01-05 01:01:28.249865 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.249868 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.249886 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.249891 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.249895 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.249898 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.249902 | orchestrator | 2026-01-05 01:01:28.249906 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-01-05 01:01:28.249910 | orchestrator | Monday 05 January 2026 00:52:30 +0000 (0:00:00.789) 0:03:06.634 ******** 2026-01-05 01:01:28.249914 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.249917 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.249921 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.249925 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.249929 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.249932 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.249936 | orchestrator | 2026-01-05 
01:01:28.249940 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-01-05 01:01:28.249944 | orchestrator | Monday 05 January 2026 00:52:31 +0000 (0:00:00.716) 0:03:07.350 ******** 2026-01-05 01:01:28.249948 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.249951 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.249955 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.249959 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.249963 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.249967 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.249970 | orchestrator | 2026-01-05 01:01:28.249974 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-01-05 01:01:28.249978 | orchestrator | Monday 05 January 2026 00:52:32 +0000 (0:00:01.171) 0:03:08.522 ******** 2026-01-05 01:01:28.249982 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.249985 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.249989 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.249993 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.249997 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.250000 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.250004 | orchestrator | 2026-01-05 01:01:28.250008 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-01-05 01:01:28.250037 | orchestrator | Monday 05 January 2026 00:52:33 +0000 (0:00:00.809) 0:03:09.332 ******** 2026-01-05 01:01:28.250046 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.250050 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.250054 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.250058 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.250062 | orchestrator | skipping: 
[testbed-node-1] 2026-01-05 01:01:28.250065 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.250069 | orchestrator | 2026-01-05 01:01:28.250073 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-01-05 01:01:28.250077 | orchestrator | Monday 05 January 2026 00:52:34 +0000 (0:00:00.831) 0:03:10.163 ******** 2026-01-05 01:01:28.250081 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.250085 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.250088 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.250092 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.250096 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.250100 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.250103 | orchestrator | 2026-01-05 01:01:28.250107 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-01-05 01:01:28.250111 | orchestrator | Monday 05 January 2026 00:52:34 +0000 (0:00:00.691) 0:03:10.855 ******** 2026-01-05 01:01:28.250118 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.250122 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.250126 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.250131 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.250136 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:28.250142 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:28.250148 | orchestrator | 2026-01-05 01:01:28.250152 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-01-05 01:01:28.250156 | orchestrator | Monday 05 January 2026 00:52:36 +0000 (0:00:01.301) 0:03:12.156 ******** 2026-01-05 01:01:28.250160 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 
01:01:28.250163 | orchestrator | 2026-01-05 01:01:28.250167 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-01-05 01:01:28.250215 | orchestrator | Monday 05 January 2026 00:52:37 +0000 (0:00:01.165) 0:03:13.322 ******** 2026-01-05 01:01:28.250220 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-01-05 01:01:28.250224 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-01-05 01:01:28.250228 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-01-05 01:01:28.250232 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-01-05 01:01:28.250236 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-01-05 01:01:28.250240 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-01-05 01:01:28.250244 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-01-05 01:01:28.250248 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-01-05 01:01:28.250251 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-01-05 01:01:28.250257 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-01-05 01:01:28.250264 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-01-05 01:01:28.250268 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-01-05 01:01:28.250272 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-01-05 01:01:28.250275 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-01-05 01:01:28.250279 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-01-05 01:01:28.250283 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-01-05 01:01:28.250287 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-01-05 01:01:28.250290 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-01-05 01:01:28.250312 
| orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-01-05 01:01:28.250321 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-01-05 01:01:28.250325 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-01-05 01:01:28.250329 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-01-05 01:01:28.250333 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-01-05 01:01:28.250336 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-01-05 01:01:28.250340 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-01-05 01:01:28.250344 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-01-05 01:01:28.250347 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-01-05 01:01:28.250351 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-01-05 01:01:28.250355 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-01-05 01:01:28.250358 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-01-05 01:01:28.250362 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-01-05 01:01:28.250366 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-01-05 01:01:28.250370 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-01-05 01:01:28.250373 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-01-05 01:01:28.250377 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-01-05 01:01:28.250381 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-01-05 01:01:28.250384 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-01-05 01:01:28.250388 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-01-05 01:01:28.250392 | orchestrator | changed: 
[testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-01-05 01:01:28.250396 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-01-05 01:01:28.250400 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-01-05 01:01:28.250403 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-01-05 01:01:28.250407 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-01-05 01:01:28.250411 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-01-05 01:01:28.250414 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-01-05 01:01:28.250418 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-05 01:01:28.250422 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-01-05 01:01:28.250426 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-05 01:01:28.250430 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-01-05 01:01:28.250433 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-05 01:01:28.250437 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-01-05 01:01:28.250441 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-05 01:01:28.250448 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-05 01:01:28.250452 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-05 01:01:28.250456 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-05 01:01:28.250460 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-05 01:01:28.250463 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-05 01:01:28.250467 | orchestrator | changed: [testbed-node-3] 
=> (item=/var/lib/ceph/bootstrap-mds) 2026-01-05 01:01:28.250471 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-05 01:01:28.250475 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-05 01:01:28.250478 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-05 01:01:28.250486 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-05 01:01:28.250490 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-05 01:01:28.250494 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-05 01:01:28.250497 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-05 01:01:28.250501 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-05 01:01:28.250505 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-05 01:01:28.250509 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-05 01:01:28.250513 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-05 01:01:28.250516 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-05 01:01:28.250520 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-05 01:01:28.250524 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-05 01:01:28.250528 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-05 01:01:28.250531 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-05 01:01:28.250535 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-05 01:01:28.250539 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-05 
01:01:28.250556 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-05 01:01:28.250560 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-05 01:01:28.250564 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-05 01:01:28.250568 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-05 01:01:28.250572 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-01-05 01:01:28.250576 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-05 01:01:28.250579 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-01-05 01:01:28.250583 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-05 01:01:28.250587 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-05 01:01:28.250591 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-01-05 01:01:28.250594 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-01-05 01:01:28.250598 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-01-05 01:01:28.250602 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-01-05 01:01:28.250606 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-01-05 01:01:28.250609 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-01-05 01:01:28.250613 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-01-05 01:01:28.250617 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-05 01:01:28.250621 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-01-05 01:01:28.250624 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-01-05 01:01:28.250628 | orchestrator | changed: [testbed-node-2] => 
(item=/var/log/ceph) 2026-01-05 01:01:28.250632 | orchestrator | 2026-01-05 01:01:28.250635 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-01-05 01:01:28.250639 | orchestrator | Monday 05 January 2026 00:52:45 +0000 (0:00:07.956) 0:03:21.278 ******** 2026-01-05 01:01:28.250643 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.250647 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.250651 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.250655 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:01:28.250663 | orchestrator | 2026-01-05 01:01:28.250667 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-01-05 01:01:28.250671 | orchestrator | Monday 05 January 2026 00:52:46 +0000 (0:00:01.409) 0:03:22.688 ******** 2026-01-05 01:01:28.250675 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-05 01:01:28.250679 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-05 01:01:28.250687 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-05 01:01:28.250691 | orchestrator | 2026-01-05 01:01:28.250695 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-01-05 01:01:28.250699 | orchestrator | Monday 05 January 2026 00:52:48 +0000 (0:00:01.625) 0:03:24.313 ******** 2026-01-05 01:01:28.250703 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-05 01:01:28.250707 | orchestrator | 
changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-05 01:01:28.250711 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-05 01:01:28.250714 | orchestrator | 2026-01-05 01:01:28.250718 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-01-05 01:01:28.250722 | orchestrator | Monday 05 January 2026 00:52:49 +0000 (0:00:01.425) 0:03:25.738 ******** 2026-01-05 01:01:28.250726 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.250729 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.250733 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.250737 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.250741 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.250744 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.250748 | orchestrator | 2026-01-05 01:01:28.250752 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-01-05 01:01:28.250756 | orchestrator | Monday 05 January 2026 00:52:50 +0000 (0:00:00.545) 0:03:26.284 ******** 2026-01-05 01:01:28.250759 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.250763 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.250767 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.250771 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.250775 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.250778 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.250782 | orchestrator | 2026-01-05 01:01:28.250786 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-01-05 01:01:28.250789 | orchestrator | Monday 05 January 2026 00:52:51 +0000 (0:00:00.750) 0:03:27.035 ******** 2026-01-05 
01:01:28.250805 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.250809 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.250813 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.250817 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.250821 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.250825 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.250828 | orchestrator | 2026-01-05 01:01:28.250846 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-01-05 01:01:28.250851 | orchestrator | Monday 05 January 2026 00:52:51 +0000 (0:00:00.533) 0:03:27.568 ******** 2026-01-05 01:01:28.250855 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.250859 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.250863 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.250867 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.250870 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.250874 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.250882 | orchestrator | 2026-01-05 01:01:28.250886 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-01-05 01:01:28.250890 | orchestrator | Monday 05 January 2026 00:52:52 +0000 (0:00:00.858) 0:03:28.427 ******** 2026-01-05 01:01:28.250894 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.250897 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.250901 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.250905 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.250909 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.250912 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.250916 | orchestrator | 2026-01-05 01:01:28.250920 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many 
osds are to be created] *** 2026-01-05 01:01:28.250924 | orchestrator | Monday 05 January 2026 00:52:52 +0000 (0:00:00.531) 0:03:28.959 ******** 2026-01-05 01:01:28.250927 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.250931 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.250935 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.250939 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.250943 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.250947 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.250950 | orchestrator | 2026-01-05 01:01:28.250954 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-01-05 01:01:28.250958 | orchestrator | Monday 05 January 2026 00:52:53 +0000 (0:00:00.877) 0:03:29.836 ******** 2026-01-05 01:01:28.250962 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.250965 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.250969 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.250973 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.250977 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.250981 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.250984 | orchestrator | 2026-01-05 01:01:28.250988 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-01-05 01:01:28.250992 | orchestrator | Monday 05 January 2026 00:52:54 +0000 (0:00:00.685) 0:03:30.521 ******** 2026-01-05 01:01:28.250996 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.251000 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.251003 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.251007 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.251011 | orchestrator | skipping: [testbed-node-1] 2026-01-05 
01:01:28.251015 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.251019 | orchestrator | 2026-01-05 01:01:28.251022 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-01-05 01:01:28.251026 | orchestrator | Monday 05 January 2026 00:52:55 +0000 (0:00:00.844) 0:03:31.366 ******** 2026-01-05 01:01:28.251030 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.251034 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.251040 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.251044 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.251048 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.251052 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.251056 | orchestrator | 2026-01-05 01:01:28.251060 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-01-05 01:01:28.251063 | orchestrator | Monday 05 January 2026 00:52:58 +0000 (0:00:03.320) 0:03:34.686 ******** 2026-01-05 01:01:28.251067 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.251071 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.251075 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.251079 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.251082 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.251086 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.251090 | orchestrator | 2026-01-05 01:01:28.251094 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-01-05 01:01:28.251100 | orchestrator | Monday 05 January 2026 00:52:59 +0000 (0:00:00.924) 0:03:35.611 ******** 2026-01-05 01:01:28.251104 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.251108 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.251112 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.251115 | orchestrator | 
skipping: [testbed-node-0] 2026-01-05 01:01:28.251119 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.251123 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.251127 | orchestrator | 2026-01-05 01:01:28.251131 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-01-05 01:01:28.251134 | orchestrator | Monday 05 January 2026 00:53:00 +0000 (0:00:00.741) 0:03:36.353 ******** 2026-01-05 01:01:28.251138 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.251142 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.251146 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.251149 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.251153 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.251157 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.251160 | orchestrator | 2026-01-05 01:01:28.251164 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-01-05 01:01:28.251168 | orchestrator | Monday 05 January 2026 00:53:01 +0000 (0:00:01.281) 0:03:37.634 ******** 2026-01-05 01:01:28.251172 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-05 01:01:28.251176 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-05 01:01:28.251179 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-05 01:01:28.251185 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.251208 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.251212 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.251216 | orchestrator | 2026-01-05 01:01:28.251220 | orchestrator | TASK [ceph-config : Set 
config to cluster] ************************************* 2026-01-05 01:01:28.251224 | orchestrator | Monday 05 January 2026 00:53:02 +0000 (0:00:00.792) 0:03:38.427 ******** 2026-01-05 01:01:28.251229 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-01-05 01:01:28.251235 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-01-05 01:01:28.251241 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.251244 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-01-05 01:01:28.251249 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-01-05 01:01:28.251252 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  
2026-01-05 01:01:28.251260 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-01-05 01:01:28.251264 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.251267 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.251277 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.251281 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.251285 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.251288 | orchestrator | 2026-01-05 01:01:28.251292 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-01-05 01:01:28.251296 | orchestrator | Monday 05 January 2026 00:53:03 +0000 (0:00:01.040) 0:03:39.467 ******** 2026-01-05 01:01:28.251300 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.251304 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.251307 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.251311 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.251315 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.251319 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.251323 | orchestrator | 2026-01-05 01:01:28.251326 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-01-05 01:01:28.251330 | orchestrator | Monday 05 January 2026 00:53:04 +0000 (0:00:00.874) 0:03:40.342 ******** 2026-01-05 01:01:28.251334 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.251338 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.251342 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.251346 | orchestrator | skipping: [testbed-node-0] 2026-01-05 
2026-01-05 01:01:28.251349 | orchestrator | skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
Monday 05 January 2026 00:53:05 +0000 (0:00:00.931) 0:03:41.273 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
Monday 05 January 2026 00:53:06 +0000 (0:00:00.781) 0:03:42.055 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
Monday 05 January 2026 00:53:07 +0000 (0:00:01.005) 0:03:43.060 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
Monday 05 January 2026 00:53:07 +0000 (0:00:00.803) 0:03:43.864 ********
ok: [testbed-node-3]
ok: [testbed-node-5]
skipping: [testbed-node-0]
ok: [testbed-node-4]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _interface] ****************************************
Monday 05 January 2026 00:53:08 +0000 (0:00:01.085) 0:03:44.949 ********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
Monday 05 January 2026 00:53:09 +0000 (0:00:00.479) 0:03:45.429 ********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
Monday 05 January 2026 00:53:09 +0000 (0:00:00.457) 0:03:45.887 ********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
Monday 05 January 2026 00:53:10 +0000 (0:00:00.461) 0:03:46.349 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact rgw_instances] *************************************
Monday 05 January 2026 00:53:11 +0000 (0:00:02.416) 0:03:47.125 ********
ok: [testbed-node-4] => (item=0)
ok: [testbed-node-3] => (item=0)
skipping: [testbed-node-0] => (item=0)
ok: [testbed-node-5] => (item=0)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=0)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=0)
skipping: [testbed-node-2]

TASK [ceph-config : Generate Ceph file] ****************************************
Monday 05 January 2026 00:53:13 +0000 (0:00:02.416) 0:03:49.542 ********
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Monday 05 January 2026 00:53:16 +0000 (0:00:03.469) 0:03:53.011 ********
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mons handler] **********************************
Monday 05 January 2026 00:53:18 +0000 (0:00:01.284) 0:03:54.295 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
Monday 05 January 2026 00:53:19 +0000 (0:00:01.213) 0:03:55.508 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
Monday 05 January 2026 00:53:19 +0000 (0:00:00.380) 0:03:55.889 ********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
Monday 05 January 2026 00:53:21 +0000 (0:00:01.509) 0:03:57.398 ********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
Monday 05 January 2026 00:53:22 +0000 (0:00:00.687) 0:03:58.086 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Osds handler] **********************************
Monday 05 January 2026 00:53:22 +0000 (0:00:00.365) 0:03:58.451 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
Monday 05 January 2026 00:53:23 +0000 (0:00:01.149) 0:03:59.601 ********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
Monday 05 January 2026 00:53:23 +0000 (0:00:00.412) 0:04:00.013 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
Monday 05 January 2026 00:53:24 +0000 (0:00:00.371) 0:04:00.385 ********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
Monday 05 January 2026 00:53:24 +0000 (0:00:00.262) 0:04:00.647 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Get pool list] *********************************
Monday 05 January 2026 00:53:24 +0000 (0:00:00.358) 0:04:01.005 ********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
Monday 05 January 2026 00:53:25 +0000 (0:00:00.277) 0:04:01.283 ********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
Monday 05 January 2026 00:53:25 +0000 (0:00:00.259) 0:04:01.542 ********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
Monday 05 January 2026 00:53:25 +0000 (0:00:00.124) 0:04:01.667 ********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
Monday 05 January 2026 00:53:26 +0000 (0:00:00.940) 0:04:02.607 ********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
Monday 05 January 2026 00:53:26 +0000 (0:00:00.249) 0:04:02.857 ********
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
Monday 05 January 2026 00:53:27 +0000 (0:00:00.500) 0:04:03.357 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
Monday 05 January 2026 00:53:27 +0000 (0:00:00.435) 0:04:03.793 ********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
Monday 05 January 2026 00:53:28 +0000 (0:00:00.299) 0:04:04.092 ********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
Monday 05 January 2026 00:53:28 +0000 (0:00:00.319) 0:04:04.411 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
Monday 05 January 2026 00:53:29 +0000 (0:00:01.322) 0:04:05.734 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
Monday 05 January 2026 00:53:30 +0000 (0:00:00.353) 0:04:06.087 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
Monday 05 January 2026 00:53:31 +0000 (0:00:01.440) 0:04:07.528 ********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
Monday 05 January 2026 00:53:32 +0000 (0:00:00.908) 0:04:08.437 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
Monday 05 January 2026 00:53:33 +0000 (0:00:00.647) 0:04:09.084 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
Monday 05 January 2026 00:53:33 +0000 (0:00:00.856) 0:04:09.940 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
Monday 05 January 2026 00:53:34 +0000 (0:00:00.646) 0:04:10.587 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
Monday 05 January 2026 00:53:36 +0000 (0:00:01.463) 0:04:12.050 ********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
Monday 05 January 2026 00:53:36 +0000 (0:00:00.717) 0:04:12.767 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
Monday 05 January 2026 00:53:37 +0000 (0:00:00.391) 0:04:13.159 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
Monday 05 January 2026 00:53:38 +0000 (0:00:01.145) 0:04:14.305 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
Monday 05 January 2026 00:53:39 +0000 (0:00:01.014) 0:04:15.319 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
Monday 05 January 2026 00:53:39 +0000 (0:00:00.657) 0:04:15.977 ********
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
Monday 05 January 2026 00:53:41 +0000 (0:00:01.685) 0:04:17.662 ********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
Monday 05 January 2026 00:53:42 +0000 (0:00:00.662) 0:04:18.324 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mon] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Monday 05 January 2026 00:53:42 +0000 (0:00:00.592) 0:04:18.916 ********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Include check_running_containers.yml] *********************
Monday 05 January 2026 00:53:43 +0000 (0:00:00.742) 0:04:19.659 ********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Check for a mon container] ********************************
Monday 05 January 2026 00:53:44 +0000 (0:00:00.626) 0:04:20.285 ********
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [ceph-handler : Check for an osd container] *******************************
Monday 05 January 2026 00:53:45 +0000 (0:00:01.308) 0:04:21.594 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mds container] ********************************
Monday 05 January 2026 00:53:45 +0000 (0:00:00.297) 0:04:21.891 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a rgw container] ********************************
Monday 05 January 2026 00:53:46 +0000 (0:00:00.341) 0:04:22.233 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mgr container] ********************************
Monday 05 January 2026 00:53:46 +0000 (0:00:00.389) 0:04:22.622 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Monday 05 January 2026 00:53:47 +0000 (0:00:01.349) 0:04:23.972 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Monday 05 January 2026 00:53:48 +0000 (0:00:00.341) 0:04:24.314 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Monday 05 January 2026 00:53:48 +0000 (0:00:00.383) 0:04:24.697 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Monday 05 January 2026 00:53:49 +0000 (0:00:00.872) 0:04:25.570 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Monday 05 January 2026 00:53:50 +0000 (0:00:01.224) 0:04:26.794 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Monday 05 January 2026 00:53:51 +0000 (0:00:00.385) 0:04:27.179 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Monday 05 January 2026 00:53:51 +0000 (0:00:00.396) 0:04:27.576 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Monday 05 January 2026 00:53:51 +0000 (0:00:00.340) 0:04:27.917 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Monday 05 January 2026 00:53:52 +0000 (0:00:00.351) 0:04:28.268 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Monday 05 January 2026 00:53:52 +0000 (0:00:00.711) 0:04:28.979 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Monday 05 January 2026 00:53:53 +0000 (0:00:00.394) 0:04:29.374 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Monday 05 January 2026 00:53:53 +0000 (0:00:00.346) 0:04:29.721 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Monday 05 January 2026 00:53:54 +0000 (0:00:00.402) 0:04:30.123 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Monday 05 January 2026 00:53:55 +0000 (0:00:00.936) 0:04:31.060 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
Monday 05 January 2026 00:53:56 +0000 (0:00:00.992) 0:04:32.052 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Include deploy_monitors.yml] **********************************
Monday 05 January 2026 00:53:56 +0000 (0:00:00.448) 0:04:32.500 ********
included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Check if monitor initial keyring already exists] **************
Monday 05 January 2026 00:53:57 +0000 (0:00:01.142) 0:04:33.643 ********
skipping: [testbed-node-0]

TASK [ceph-mon : Generate monitor initial keyring] *****************************
Monday 05 January 2026 00:53:57 +0000 (0:00:00.229) 0:04:33.873 ********
changed: [testbed-node-0 -> localhost]

TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
Monday 05 January 2026 00:53:59 +0000 (0:00:01.417) 0:04:35.290 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Get initial keyring when it already exists] *******************
Monday 05 January 2026 00:53:59 +0000 (0:00:00.400) 0:04:35.690 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Create monitor initial keyring] *******************************
Monday 05 January 2026 00:54:00 +0000 (0:00:00.836) 0:04:36.527 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
Monday 05 January 2026 00:54:02 +0000 (0:00:01.537) 0:04:38.065 ********
2026-01-05 01:01:28.254153 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:01:28.254160 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:01:28.254167 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:01:28.254173 | orchestrator | 2026-01-05 01:01:28.254179 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-01-05 01:01:28.254185 | orchestrator | Monday 05 January 2026 00:54:03 +0000 (0:00:01.048) 0:04:39.113 ******** 2026-01-05 01:01:28.254192 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:01:28.254197 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:01:28.254202 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:01:28.254207 | orchestrator | 2026-01-05 01:01:28.254213 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-01-05 01:01:28.254219 | orchestrator | Monday 05 January 2026 00:54:04 +0000 (0:00:01.043) 0:04:40.156 ******** 2026-01-05 01:01:28.254225 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:28.254230 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.254236 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:28.254241 | orchestrator | 2026-01-05 01:01:28.254247 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-01-05 01:01:28.254254 | orchestrator | Monday 05 January 2026 00:54:05 +0000 (0:00:00.985) 0:04:41.142 ******** 2026-01-05 01:01:28.254260 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:01:28.254266 | orchestrator | 2026-01-05 01:01:28.254273 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-01-05 01:01:28.254279 | orchestrator | Monday 05 January 2026 00:54:07 +0000 (0:00:02.260) 0:04:43.402 ******** 2026-01-05 01:01:28.254285 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.254291 | orchestrator | 2026-01-05 01:01:28.254296 | orchestrator | TASK [ceph-mon : 
Copy admin keyring over to mons] ****************************** 2026-01-05 01:01:28.254312 | orchestrator | Monday 05 January 2026 00:54:08 +0000 (0:00:00.772) 0:04:44.175 ******** 2026-01-05 01:01:28.254319 | orchestrator | changed: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:01:28.254325 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-01-05 01:01:28.254331 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:01:28.254337 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-01-05 01:01:28.254343 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-05 01:01:28.254349 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-05 01:01:28.254355 | orchestrator | changed: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-05 01:01:28.254361 | orchestrator | changed: [testbed-node-1 -> {{ item }}] 2026-01-05 01:01:28.254368 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-01-05 01:01:28.254380 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-01-05 01:01:28.254386 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-05 01:01:28.254392 | orchestrator | ok: [testbed-node-0 -> {{ item }}] 2026-01-05 01:01:28.254399 | orchestrator | 2026-01-05 01:01:28.254405 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-01-05 01:01:28.254411 | orchestrator | Monday 05 January 2026 00:54:13 +0000 (0:00:05.221) 0:04:49.397 ******** 2026-01-05 01:01:28.254417 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:01:28.254423 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:01:28.254430 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:01:28.254436 | orchestrator | 2026-01-05 01:01:28.254442 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] 
************************** 2026-01-05 01:01:28.254448 | orchestrator | Monday 05 January 2026 00:54:15 +0000 (0:00:01.725) 0:04:51.122 ******** 2026-01-05 01:01:28.254453 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.254459 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:28.254465 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:28.254470 | orchestrator | 2026-01-05 01:01:28.254476 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-01-05 01:01:28.254482 | orchestrator | Monday 05 January 2026 00:54:15 +0000 (0:00:00.508) 0:04:51.631 ******** 2026-01-05 01:01:28.254488 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.254494 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:28.254500 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:28.254507 | orchestrator | 2026-01-05 01:01:28.254512 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-01-05 01:01:28.254518 | orchestrator | Monday 05 January 2026 00:54:16 +0000 (0:00:00.954) 0:04:52.586 ******** 2026-01-05 01:01:28.254524 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:01:28.254568 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:01:28.254574 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:01:28.254578 | orchestrator | 2026-01-05 01:01:28.254582 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-01-05 01:01:28.254586 | orchestrator | Monday 05 January 2026 00:54:18 +0000 (0:00:02.127) 0:04:54.713 ******** 2026-01-05 01:01:28.254589 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:01:28.254593 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:01:28.254597 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:01:28.254601 | orchestrator | 2026-01-05 01:01:28.254605 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-01-05 
01:01:28.254609 | orchestrator | Monday 05 January 2026 00:54:20 +0000 (0:00:01.663) 0:04:56.376 ******** 2026-01-05 01:01:28.254612 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.254616 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.254620 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.254624 | orchestrator | 2026-01-05 01:01:28.254627 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-01-05 01:01:28.254631 | orchestrator | Monday 05 January 2026 00:54:20 +0000 (0:00:00.433) 0:04:56.810 ******** 2026-01-05 01:01:28.254635 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:01:28.254639 | orchestrator | 2026-01-05 01:01:28.254643 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-01-05 01:01:28.254646 | orchestrator | Monday 05 January 2026 00:54:21 +0000 (0:00:00.932) 0:04:57.742 ******** 2026-01-05 01:01:28.254650 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.254654 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.254658 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.254662 | orchestrator | 2026-01-05 01:01:28.254667 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-01-05 01:01:28.254674 | orchestrator | Monday 05 January 2026 00:54:22 +0000 (0:00:00.645) 0:04:58.388 ******** 2026-01-05 01:01:28.254679 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.254691 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.254697 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.254702 | orchestrator | 2026-01-05 01:01:28.254708 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-01-05 01:01:28.254714 | orchestrator | Monday 05 January 2026 
00:54:22 +0000 (0:00:00.508) 0:04:58.897 ******** 2026-01-05 01:01:28.254721 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:01:28.254729 | orchestrator | 2026-01-05 01:01:28.254734 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-01-05 01:01:28.254737 | orchestrator | Monday 05 January 2026 00:54:23 +0000 (0:00:00.920) 0:04:59.817 ******** 2026-01-05 01:01:28.254741 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:01:28.254745 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:01:28.254749 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:01:28.254752 | orchestrator | 2026-01-05 01:01:28.254756 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-01-05 01:01:28.254760 | orchestrator | Monday 05 January 2026 00:54:26 +0000 (0:00:02.508) 0:05:02.325 ******** 2026-01-05 01:01:28.254764 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:01:28.254767 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:01:28.254775 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:01:28.254779 | orchestrator | 2026-01-05 01:01:28.254783 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-01-05 01:01:28.254787 | orchestrator | Monday 05 January 2026 00:54:27 +0000 (0:00:01.223) 0:05:03.549 ******** 2026-01-05 01:01:28.254811 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:01:28.254926 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:01:28.254940 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:01:28.254944 | orchestrator | 2026-01-05 01:01:28.254948 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-01-05 01:01:28.254952 | orchestrator | Monday 05 January 2026 00:54:29 +0000 (0:00:01.730) 0:05:05.279 ******** 2026-01-05 01:01:28.254956 | 
orchestrator | changed: [testbed-node-0] 2026-01-05 01:01:28.254960 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:01:28.254964 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:01:28.254967 | orchestrator | 2026-01-05 01:01:28.254971 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-01-05 01:01:28.254975 | orchestrator | Monday 05 January 2026 00:54:31 +0000 (0:00:02.387) 0:05:07.667 ******** 2026-01-05 01:01:28.254979 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:01:28.254983 | orchestrator | 2026-01-05 01:01:28.254987 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-01-05 01:01:28.254990 | orchestrator | Monday 05 January 2026 00:54:32 +0000 (0:00:00.569) 0:05:08.237 ******** 2026-01-05 01:01:28.254994 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.254998 | orchestrator | 2026-01-05 01:01:28.255002 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-01-05 01:01:28.255006 | orchestrator | Monday 05 January 2026 00:54:33 +0000 (0:00:01.322) 0:05:09.560 ******** 2026-01-05 01:01:28.255010 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.255013 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:28.255017 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:28.255021 | orchestrator | 2026-01-05 01:01:28.255025 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-01-05 01:01:28.255029 | orchestrator | Monday 05 January 2026 00:54:43 +0000 (0:00:10.362) 0:05:19.922 ******** 2026-01-05 01:01:28.255033 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.255036 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.255040 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.255044 | orchestrator | 
2026-01-05 01:01:28.255048 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-01-05 01:01:28.255051 | orchestrator | Monday 05 January 2026 00:54:44 +0000 (0:00:00.659) 0:05:20.581 ******** 2026-01-05 01:01:28.255107 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__bce8bf938fcba0e4805f26a217f88e288d53b198'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-01-05 01:01:28.255114 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__bce8bf938fcba0e4805f26a217f88e288d53b198'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-01-05 01:01:28.255119 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__bce8bf938fcba0e4805f26a217f88e288d53b198'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-01-05 01:01:28.255125 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__bce8bf938fcba0e4805f26a217f88e288d53b198'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-01-05 01:01:28.255130 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 
'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__bce8bf938fcba0e4805f26a217f88e288d53b198'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-01-05 01:01:28.255139 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__bce8bf938fcba0e4805f26a217f88e288d53b198'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__bce8bf938fcba0e4805f26a217f88e288d53b198'}])  2026-01-05 01:01:28.255144 | orchestrator | 2026-01-05 01:01:28.255148 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-05 01:01:28.255152 | orchestrator | Monday 05 January 2026 00:55:00 +0000 (0:00:15.487) 0:05:36.069 ******** 2026-01-05 01:01:28.255156 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.255160 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.255164 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.255167 | orchestrator | 2026-01-05 01:01:28.255171 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-01-05 01:01:28.255175 | orchestrator | Monday 05 January 2026 00:55:00 +0000 (0:00:00.364) 0:05:36.434 ******** 2026-01-05 01:01:28.255179 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:01:28.255182 | orchestrator | 2026-01-05 01:01:28.255186 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-01-05 01:01:28.255190 | orchestrator | Monday 05 January 2026 00:55:01 +0000 
(0:00:00.931) 0:05:37.366 ******** 2026-01-05 01:01:28.255194 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.255198 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:28.255202 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:28.255206 | orchestrator | 2026-01-05 01:01:28.255209 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-01-05 01:01:28.255213 | orchestrator | Monday 05 January 2026 00:55:01 +0000 (0:00:00.364) 0:05:37.730 ******** 2026-01-05 01:01:28.255221 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.255225 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.255229 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.255232 | orchestrator | 2026-01-05 01:01:28.255236 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-01-05 01:01:28.255240 | orchestrator | Monday 05 January 2026 00:55:02 +0000 (0:00:00.466) 0:05:38.196 ******** 2026-01-05 01:01:28.255244 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-05 01:01:28.255248 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-05 01:01:28.255251 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-05 01:01:28.255255 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.255259 | orchestrator | 2026-01-05 01:01:28.255263 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-01-05 01:01:28.255266 | orchestrator | Monday 05 January 2026 00:55:03 +0000 (0:00:01.160) 0:05:39.357 ******** 2026-01-05 01:01:28.255270 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.255274 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:28.255278 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:28.255281 | orchestrator | 2026-01-05 01:01:28.255285 | orchestrator | PLAY [Apply role ceph-mgr] 
***************************************************** 2026-01-05 01:01:28.255289 | orchestrator | 2026-01-05 01:01:28.255306 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-05 01:01:28.255311 | orchestrator | Monday 05 January 2026 00:55:04 +0000 (0:00:01.062) 0:05:40.419 ******** 2026-01-05 01:01:28.255315 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:01:28.255319 | orchestrator | 2026-01-05 01:01:28.255323 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-05 01:01:28.255327 | orchestrator | Monday 05 January 2026 00:55:04 +0000 (0:00:00.597) 0:05:41.017 ******** 2026-01-05 01:01:28.255330 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:01:28.255334 | orchestrator | 2026-01-05 01:01:28.255338 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-05 01:01:28.255342 | orchestrator | Monday 05 January 2026 00:55:05 +0000 (0:00:00.999) 0:05:42.017 ******** 2026-01-05 01:01:28.255345 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.255349 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:28.255353 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:28.255356 | orchestrator | 2026-01-05 01:01:28.255360 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-05 01:01:28.255364 | orchestrator | Monday 05 January 2026 00:55:06 +0000 (0:00:00.758) 0:05:42.775 ******** 2026-01-05 01:01:28.255368 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.255372 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.255375 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.255379 | orchestrator | 2026-01-05 
01:01:28.255383 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-05 01:01:28.255387 | orchestrator | Monday 05 January 2026 00:55:07 +0000 (0:00:00.322) 0:05:43.098 ******** 2026-01-05 01:01:28.255390 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.255394 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.255398 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.255401 | orchestrator | 2026-01-05 01:01:28.255405 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-05 01:01:28.255409 | orchestrator | Monday 05 January 2026 00:55:07 +0000 (0:00:00.609) 0:05:43.707 ******** 2026-01-05 01:01:28.255413 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.255416 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.255420 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.255424 | orchestrator | 2026-01-05 01:01:28.255428 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-05 01:01:28.255439 | orchestrator | Monday 05 January 2026 00:55:08 +0000 (0:00:00.340) 0:05:44.048 ******** 2026-01-05 01:01:28.255443 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.255447 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:28.255450 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:28.255454 | orchestrator | 2026-01-05 01:01:28.255458 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-05 01:01:28.255462 | orchestrator | Monday 05 January 2026 00:55:08 +0000 (0:00:00.784) 0:05:44.833 ******** 2026-01-05 01:01:28.255466 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.255469 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.255473 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.255477 | orchestrator | 2026-01-05 01:01:28.255484 | 
orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-05 01:01:28.255488 | orchestrator | Monday 05 January 2026 00:55:09 +0000 (0:00:00.365) 0:05:45.198 ******** 2026-01-05 01:01:28.255491 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.255495 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.255499 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.255502 | orchestrator | 2026-01-05 01:01:28.255506 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-05 01:01:28.255510 | orchestrator | Monday 05 January 2026 00:55:09 +0000 (0:00:00.722) 0:05:45.921 ******** 2026-01-05 01:01:28.255514 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.255517 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:28.255521 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:28.255525 | orchestrator | 2026-01-05 01:01:28.255529 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-05 01:01:28.255532 | orchestrator | Monday 05 January 2026 00:55:10 +0000 (0:00:00.770) 0:05:46.692 ******** 2026-01-05 01:01:28.255536 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.255540 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:28.255544 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:28.255547 | orchestrator | 2026-01-05 01:01:28.255551 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-05 01:01:28.255555 | orchestrator | Monday 05 January 2026 00:55:11 +0000 (0:00:00.769) 0:05:47.461 ******** 2026-01-05 01:01:28.255559 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.255562 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.255566 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.255570 | orchestrator | 2026-01-05 01:01:28.255573 | orchestrator | TASK [ceph-handler : 
Set_fact handler_mon_status] ****************************** 2026-01-05 01:01:28.255577 | orchestrator | Monday 05 January 2026 00:55:11 +0000 (0:00:00.334) 0:05:47.795 ******** 2026-01-05 01:01:28.255581 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.255585 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:28.255588 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:28.255592 | orchestrator | 2026-01-05 01:01:28.255596 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-05 01:01:28.255600 | orchestrator | Monday 05 January 2026 00:55:12 +0000 (0:00:00.693) 0:05:48.489 ******** 2026-01-05 01:01:28.255603 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.255607 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.255611 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.255615 | orchestrator | 2026-01-05 01:01:28.255618 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-05 01:01:28.255622 | orchestrator | Monday 05 January 2026 00:55:12 +0000 (0:00:00.325) 0:05:48.814 ******** 2026-01-05 01:01:28.255626 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.255629 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.255648 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.255653 | orchestrator | 2026-01-05 01:01:28.255657 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-05 01:01:28.255661 | orchestrator | Monday 05 January 2026 00:55:13 +0000 (0:00:00.355) 0:05:49.170 ******** 2026-01-05 01:01:28.255668 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.255672 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.255675 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.255679 | orchestrator | 2026-01-05 01:01:28.255683 | orchestrator | TASK [ceph-handler : Set_fact 
handler_nfs_status] ****************************** 2026-01-05 01:01:28.255687 | orchestrator | Monday 05 January 2026 00:55:13 +0000 (0:00:00.335) 0:05:49.505 ******** 2026-01-05 01:01:28.255690 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.255694 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.255698 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.255702 | orchestrator | 2026-01-05 01:01:28.255705 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-05 01:01:28.255709 | orchestrator | Monday 05 January 2026 00:55:13 +0000 (0:00:00.362) 0:05:49.868 ******** 2026-01-05 01:01:28.255713 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.255717 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.255720 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.255724 | orchestrator | 2026-01-05 01:01:28.255728 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-05 01:01:28.255732 | orchestrator | Monday 05 January 2026 00:55:14 +0000 (0:00:00.681) 0:05:50.550 ******** 2026-01-05 01:01:28.255735 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.255739 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:28.255743 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:28.255747 | orchestrator | 2026-01-05 01:01:28.255750 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-05 01:01:28.255754 | orchestrator | Monday 05 January 2026 00:55:14 +0000 (0:00:00.407) 0:05:50.957 ******** 2026-01-05 01:01:28.255758 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.255762 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:28.255765 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:28.255769 | orchestrator | 2026-01-05 01:01:28.255773 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] 
*************************
2026-01-05 01:01:28.255777 | orchestrator | Monday 05 January 2026 00:55:15 +0000 (0:00:00.384) 0:05:51.341 ********
2026-01-05 01:01:28.255780 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:01:28.255784 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:01:28.255788 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:01:28.255814 | orchestrator |
2026-01-05 01:01:28.255820 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-01-05 01:01:28.255826 | orchestrator | Monday 05 January 2026 00:55:16 +0000 (0:00:00.985) 0:05:52.327 ********
2026-01-05 01:01:28.255832 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-05 01:01:28.255838 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-05 01:01:28.255842 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-05 01:01:28.255845 | orchestrator |
2026-01-05 01:01:28.255849 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-01-05 01:01:28.255853 | orchestrator | Monday 05 January 2026 00:55:17 +0000 (0:00:00.721) 0:05:53.049 ********
2026-01-05 01:01:28.255857 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:01:28.255861 | orchestrator |
2026-01-05 01:01:28.255865 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-01-05 01:01:28.255897 | orchestrator | Monday 05 January 2026 00:55:17 +0000 (0:00:00.583) 0:05:53.632 ********
2026-01-05 01:01:28.255901 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:01:28.255905 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:01:28.255909 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:01:28.255913 | orchestrator |
2026-01-05 01:01:28.255916 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-01-05 01:01:28.255920 | orchestrator | Monday 05 January 2026 00:55:18 +0000 (0:00:00.625) 0:05:54.258 ********
2026-01-05 01:01:28.255928 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:28.255931 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:28.255935 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:28.255939 | orchestrator |
2026-01-05 01:01:28.255943 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-01-05 01:01:28.255946 | orchestrator | Monday 05 January 2026 00:55:18 +0000 (0:00:00.691) 0:05:54.950 ********
2026-01-05 01:01:28.255950 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-05 01:01:28.255954 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-05 01:01:28.255958 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-05 01:01:28.255962 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-01-05 01:01:28.255965 | orchestrator |
2026-01-05 01:01:28.255969 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-01-05 01:01:28.255973 | orchestrator | Monday 05 January 2026 00:55:29 +0000 (0:00:10.686) 0:06:05.636 ********
2026-01-05 01:01:28.255976 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:01:28.255980 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:01:28.255984 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:01:28.255988 | orchestrator |
2026-01-05 01:01:28.255991 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-01-05 01:01:28.255995 | orchestrator | Monday 05 January 2026 00:55:29 +0000 (0:00:00.358) 0:06:05.994 ********
2026-01-05 01:01:28.255999 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-01-05 01:01:28.256002 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-05 01:01:28.256006 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-05 01:01:28.256010 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-01-05 01:01:28.256014 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-05 01:01:28.256017 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-05 01:01:28.256021 | orchestrator |
2026-01-05 01:01:28.256040 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-01-05 01:01:28.256045 | orchestrator | Monday 05 January 2026 00:55:32 +0000 (0:00:02.479) 0:06:08.474 ********
2026-01-05 01:01:28.256049 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-01-05 01:01:28.256052 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-05 01:01:28.256056 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-05 01:01:28.256060 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-05 01:01:28.256064 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-01-05 01:01:28.256067 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-01-05 01:01:28.256071 | orchestrator |
2026-01-05 01:01:28.256075 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-01-05 01:01:28.256079 | orchestrator | Monday 05 January 2026 00:55:33 +0000 (0:00:01.276) 0:06:09.751 ********
2026-01-05 01:01:28.256083 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:01:28.256086 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:01:28.256090 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:01:28.256094 | orchestrator |
2026-01-05 01:01:28.256098 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-01-05 01:01:28.256101 | orchestrator | Monday 05 January 2026 00:55:34 +0000 (0:00:01.100) 0:06:10.852 ********
2026-01-05 01:01:28.256105 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:28.256109 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:28.256112 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:28.256116 | orchestrator |
2026-01-05 01:01:28.256120 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-01-05 01:01:28.256124 | orchestrator | Monday 05 January 2026 00:55:35 +0000 (0:00:00.324) 0:06:11.176 ********
2026-01-05 01:01:28.256127 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:28.256131 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:28.256135 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:28.256142 | orchestrator |
2026-01-05 01:01:28.256146 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-01-05 01:01:28.256150 | orchestrator | Monday 05 January 2026 00:55:35 +0000 (0:00:00.357) 0:06:11.534 ********
2026-01-05 01:01:28.256153 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:01:28.256157 | orchestrator |
2026-01-05 01:01:28.256161 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-01-05 01:01:28.256165 | orchestrator | Monday 05 January 2026 00:55:36 +0000 (0:00:00.793) 0:06:12.328 ********
2026-01-05 01:01:28.256168 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:28.256172 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:28.256176 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:28.256180 | orchestrator |
2026-01-05 01:01:28.256183 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-01-05 01:01:28.256187 | orchestrator | Monday 05 January 2026 00:55:36 +0000 (0:00:00.355) 0:06:12.683 ********
2026-01-05 01:01:28.256191 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:28.256195 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:28.256199 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:01:28.256202 | orchestrator |
2026-01-05 01:01:28.256206 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-01-05 01:01:28.256210 | orchestrator | Monday 05 January 2026 00:55:37 +0000 (0:00:00.364) 0:06:13.048 ********
2026-01-05 01:01:28.256217 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:01:28.256221 | orchestrator |
2026-01-05 01:01:28.256225 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-01-05 01:01:28.256228 | orchestrator | Monday 05 January 2026 00:55:37 +0000 (0:00:00.790) 0:06:13.838 ********
2026-01-05 01:01:28.256232 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:01:28.256236 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:01:28.256240 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:01:28.256243 | orchestrator |
2026-01-05 01:01:28.256247 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-01-05 01:01:28.256251 | orchestrator | Monday 05 January 2026 00:55:39 +0000 (0:00:01.410) 0:06:15.248 ********
2026-01-05 01:01:28.256255 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:01:28.256258 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:01:28.256262 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:01:28.256266 | orchestrator |
2026-01-05 01:01:28.256269 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-01-05 01:01:28.256273 | orchestrator | Monday 05 January 2026 00:55:40 +0000 (0:00:01.298) 0:06:16.547 ********
2026-01-05 01:01:28.256277 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:01:28.256281 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:01:28.256285 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:01:28.256288 | orchestrator |
2026-01-05 01:01:28.256292 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-01-05 01:01:28.256296 | orchestrator | Monday 05 January 2026 00:55:42 +0000 (0:00:01.851) 0:06:18.398 ********
2026-01-05 01:01:28.256300 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:01:28.256303 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:01:28.256307 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:01:28.256311 | orchestrator |
2026-01-05 01:01:28.256314 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-01-05 01:01:28.256318 | orchestrator | Monday 05 January 2026 00:55:45 +0000 (0:00:03.396) 0:06:21.795 ********
2026-01-05 01:01:28.256322 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:28.256326 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:01:28.256330 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-01-05 01:01:28.256333 | orchestrator |
2026-01-05 01:01:28.256338 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-01-05 01:01:28.256350 | orchestrator | Monday 05 January 2026 00:55:46 +0000 (0:00:00.436) 0:06:22.231 ********
2026-01-05 01:01:28.256355 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-01-05 01:01:28.256386 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-01-05 01:01:28.256396 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-01-05 01:01:28.256401 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-01-05 01:01:28.256407 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2026-01-05 01:01:28.256412 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left).
2026-01-05 01:01:28.256417 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-01-05 01:01:28.256423 | orchestrator |
2026-01-05 01:01:28.256428 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-01-05 01:01:28.256434 | orchestrator | Monday 05 January 2026 00:56:22 +0000 (0:00:36.567) 0:06:58.799 ********
2026-01-05 01:01:28.256439 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-01-05 01:01:28.256444 | orchestrator |
2026-01-05 01:01:28.256449 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-01-05 01:01:28.256455 | orchestrator | Monday 05 January 2026 00:56:24 +0000 (0:00:01.304) 0:07:00.103 ********
2026-01-05 01:01:28.256460 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:01:28.256465 | orchestrator |
2026-01-05 01:01:28.256470 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-01-05 01:01:28.256476 | orchestrator | Monday 05 January 2026 00:56:24 +0000 (0:00:00.378) 0:07:00.482 ********
2026-01-05 01:01:28.256482 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:01:28.256488 | orchestrator |
2026-01-05 01:01:28.256494 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-01-05 01:01:28.256500 | orchestrator | Monday 05 January 2026 00:56:24 +0000 (0:00:00.173) 0:07:00.656 ********
2026-01-05 01:01:28.256506 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-01-05 01:01:28.256513 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-01-05 01:01:28.256517 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-01-05 01:01:28.256523 | orchestrator |
2026-01-05 01:01:28.256529 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-01-05 01:01:28.256535 | orchestrator | Monday 05 January 2026 00:56:31 +0000 (0:00:06.486) 0:07:07.142 ********
2026-01-05 01:01:28.256540 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-01-05 01:01:28.256546 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-01-05 01:01:28.256551 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-01-05 01:01:28.256556 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-01-05 01:01:28.256562 | orchestrator |
2026-01-05 01:01:28.256567 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-05 01:01:28.256572 | orchestrator | Monday 05 January 2026 00:56:36 +0000 (0:00:05.058) 0:07:12.200 ********
2026-01-05 01:01:28.256587 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:01:28.256593 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:01:28.256599 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:01:28.256605 | orchestrator |
2026-01-05 01:01:28.256611 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-01-05 01:01:28.256617 | orchestrator | Monday 05 January 2026 00:56:36 +0000 (0:00:00.694) 0:07:12.894 ********
2026-01-05 01:01:28.256624 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:01:28.256635 | orchestrator |
2026-01-05 01:01:28.256641 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-01-05 01:01:28.256647 | orchestrator | Monday 05 January 2026 00:56:37 +0000 (0:00:00.805) 0:07:13.700 ********
2026-01-05 01:01:28.256651 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:01:28.256654 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:01:28.256658 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:01:28.256664 | orchestrator |
2026-01-05 01:01:28.256670 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-01-05 01:01:28.256676 | orchestrator | Monday 05 January 2026 00:56:38 +0000 (0:00:00.351) 0:07:14.052 ********
2026-01-05 01:01:28.256682 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:01:28.256688 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:01:28.256694 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:01:28.256701 | orchestrator |
2026-01-05 01:01:28.256707 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-01-05 01:01:28.256713 | orchestrator | Monday 05 January 2026 00:56:39 +0000 (0:00:01.145) 0:07:15.197 ********
2026-01-05 01:01:28.256719 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-05 01:01:28.256724 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-05 01:01:28.256731 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-05 01:01:28.256735 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:01:28.256739 | orchestrator |
2026-01-05 01:01:28.256743 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-01-05 01:01:28.256747 | orchestrator | Monday 05 January 2026 00:56:39 +0000 (0:00:00.601) 0:07:15.798 ********
2026-01-05 01:01:28.256750 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:01:28.256754 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:01:28.256758 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:01:28.256762 | orchestrator |
2026-01-05 01:01:28.256766 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-01-05 01:01:28.256769 | orchestrator |
2026-01-05 01:01:28.256773 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-05 01:01:28.256777 | orchestrator | Monday 05 January 2026 00:56:40 +0000 (0:00:00.951) 0:07:16.749 ********
2026-01-05 01:01:28.256866 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 01:01:28.256874 | orchestrator |
2026-01-05 01:01:28.256878 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-05 01:01:28.256881 | orchestrator | Monday 05 January 2026 00:56:41 +0000 (0:00:00.548) 0:07:17.298 ********
2026-01-05 01:01:28.256885 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 01:01:28.256889 | orchestrator |
2026-01-05 01:01:28.256893 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-05 01:01:28.256897 | orchestrator | Monday 05 January 2026 00:56:42 +0000 (0:00:00.792) 0:07:18.090 ********
2026-01-05 01:01:28.256901 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.256904 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.256908 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.256912 | orchestrator |
2026-01-05 01:01:28.256916 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-05 01:01:28.256919 | orchestrator | Monday 05 January 2026 00:56:42 +0000 (0:00:00.342) 0:07:18.433 ********
2026-01-05 01:01:28.256923 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:01:28.256927 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:01:28.256931 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:01:28.256934 | orchestrator |
2026-01-05 01:01:28.256938 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-05 01:01:28.256942 | orchestrator | Monday 05 January 2026 00:56:43 +0000 (0:00:00.780) 0:07:19.213 ********
2026-01-05 01:01:28.256950 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:01:28.256954 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:01:28.256958 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:01:28.256961 | orchestrator |
2026-01-05 01:01:28.256965 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-05 01:01:28.256969 | orchestrator | Monday 05 January 2026 00:56:43 +0000 (0:00:00.735) 0:07:19.949 ********
2026-01-05 01:01:28.256973 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:01:28.256976 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:01:28.256980 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:01:28.256984 | orchestrator |
2026-01-05 01:01:28.256987 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-05 01:01:28.256991 | orchestrator | Monday 05 January 2026 00:56:45 +0000 (0:00:01.420) 0:07:21.370 ********
2026-01-05 01:01:28.256995 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.256999 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.257002 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.257006 | orchestrator |
2026-01-05 01:01:28.257010 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-05 01:01:28.257014 | orchestrator | Monday 05 January 2026 00:56:45 +0000 (0:00:00.327) 0:07:21.697 ********
2026-01-05 01:01:28.257017 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.257021 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.257025 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.257028 | orchestrator |
2026-01-05 01:01:28.257032 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-05 01:01:28.257036 | orchestrator | Monday 05 January 2026 00:56:46 +0000 (0:00:00.355) 0:07:22.053 ********
2026-01-05 01:01:28.257040 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.257043 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.257051 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.257054 | orchestrator |
2026-01-05 01:01:28.257058 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-05 01:01:28.257062 | orchestrator | Monday 05 January 2026 00:56:46 +0000 (0:00:00.326) 0:07:22.379 ********
2026-01-05 01:01:28.257066 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:01:28.257069 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:01:28.257073 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:01:28.257077 | orchestrator |
2026-01-05 01:01:28.257081 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-05 01:01:28.257084 | orchestrator | Monday 05 January 2026 00:56:47 +0000 (0:00:01.091) 0:07:23.471 ********
2026-01-05 01:01:28.257088 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:01:28.257092 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:01:28.257096 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:01:28.257099 | orchestrator |
2026-01-05 01:01:28.257103 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-05 01:01:28.257107 | orchestrator | Monday 05 January 2026 00:56:48 +0000 (0:00:00.721) 0:07:24.193 ********
2026-01-05 01:01:28.257110 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.257114 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.257118 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.257122 | orchestrator |
2026-01-05 01:01:28.257125 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-05 01:01:28.257129 | orchestrator | Monday 05 January 2026 00:56:48 +0000 (0:00:00.313) 0:07:24.507 ********
2026-01-05 01:01:28.257133 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.257137 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.257140 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.257144 | orchestrator |
2026-01-05 01:01:28.257148 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-05 01:01:28.257151 | orchestrator | Monday 05 January 2026 00:56:48 +0000 (0:00:00.299) 0:07:24.806 ********
2026-01-05 01:01:28.257155 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:01:28.257159 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:01:28.257166 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:01:28.257170 | orchestrator |
2026-01-05 01:01:28.257174 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-05 01:01:28.257178 | orchestrator | Monday 05 January 2026 00:56:49 +0000 (0:00:00.619) 0:07:25.426 ********
2026-01-05 01:01:28.257181 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:01:28.257185 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:01:28.257189 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:01:28.257192 | orchestrator |
2026-01-05 01:01:28.257196 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-05 01:01:28.257200 | orchestrator | Monday 05 January 2026 00:56:49 +0000 (0:00:00.364) 0:07:25.790 ********
2026-01-05 01:01:28.257204 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:01:28.257207 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:01:28.257215 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:01:28.257219 | orchestrator |
2026-01-05 01:01:28.257223 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-05 01:01:28.257227 | orchestrator | Monday 05 January 2026 00:56:50 +0000 (0:00:00.368) 0:07:26.158 ********
2026-01-05 01:01:28.257231 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.257235 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.257238 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.257242 | orchestrator |
2026-01-05 01:01:28.257246 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-05 01:01:28.257250 | orchestrator | Monday 05 January 2026 00:56:50 +0000 (0:00:00.299) 0:07:26.457 ********
2026-01-05 01:01:28.257253 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.257257 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.257261 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.257265 | orchestrator |
2026-01-05 01:01:28.257268 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-05 01:01:28.257272 | orchestrator | Monday 05 January 2026 00:56:51 +0000 (0:00:00.583) 0:07:27.040 ********
2026-01-05 01:01:28.257276 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.257280 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.257284 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.257287 | orchestrator |
2026-01-05 01:01:28.257291 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-05 01:01:28.257295 | orchestrator | Monday 05 January 2026 00:56:51 +0000 (0:00:00.326) 0:07:27.367 ********
2026-01-05 01:01:28.257299 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:01:28.257302 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:01:28.257306 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:01:28.257310 | orchestrator |
2026-01-05 01:01:28.257314 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-05 01:01:28.257317 | orchestrator | Monday 05 January 2026 00:56:51 +0000 (0:00:00.358) 0:07:27.726 ********
2026-01-05 01:01:28.257321 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:01:28.257325 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:01:28.257329 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:01:28.257332 | orchestrator |
2026-01-05 01:01:28.257336 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-01-05 01:01:28.257340 | orchestrator | Monday 05 January 2026 00:56:52 +0000 (0:00:00.819) 0:07:28.545 ********
2026-01-05 01:01:28.257344 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:01:28.257347 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:01:28.257351 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:01:28.257355 | orchestrator |
2026-01-05 01:01:28.257359 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-01-05 01:01:28.257362 | orchestrator | Monday 05 January 2026 00:56:52 +0000 (0:00:00.363) 0:07:28.909 ********
2026-01-05 01:01:28.257366 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-05 01:01:28.257370 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-05 01:01:28.257377 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-05 01:01:28.257381 | orchestrator |
2026-01-05 01:01:28.257385 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-01-05 01:01:28.257388 | orchestrator | Monday 05 January 2026 00:56:53 +0000 (0:00:00.646) 0:07:29.556 ********
2026-01-05 01:01:28.257395 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 01:01:28.257399 | orchestrator |
2026-01-05 01:01:28.257402 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-01-05 01:01:28.257406 | orchestrator | Monday 05 January 2026 00:56:54 +0000 (0:00:00.519) 0:07:30.075 ********
2026-01-05 01:01:28.257410 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.257414 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.257418 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.257424 | orchestrator |
2026-01-05 01:01:28.257430 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-01-05 01:01:28.257436 | orchestrator | Monday 05 January 2026 00:56:54 +0000 (0:00:00.605) 0:07:30.681 ********
2026-01-05 01:01:28.257444 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.257450 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.257456 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.257462 | orchestrator |
2026-01-05 01:01:28.257469 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-01-05 01:01:28.257474 | orchestrator | Monday 05 January 2026 00:56:54 +0000 (0:00:00.312) 0:07:30.994 ********
2026-01-05 01:01:28.257480 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:01:28.257487 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:01:28.257492 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:01:28.257498 | orchestrator |
2026-01-05 01:01:28.257503 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-01-05 01:01:28.257509 | orchestrator | Monday 05 January 2026 00:56:55 +0000 (0:00:00.655) 0:07:31.649 ********
2026-01-05 01:01:28.257515 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:01:28.257520 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:01:28.257526 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:01:28.257538 | orchestrator |
2026-01-05 01:01:28.257552 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-01-05 01:01:28.257569 | orchestrator | Monday 05 January 2026 00:56:55 +0000 (0:00:00.334) 0:07:31.983 ********
2026-01-05 01:01:28.257585 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-05 01:01:28.257602 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-05 01:01:28.257619 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-05 01:01:28.257634 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-05 01:01:28.257652 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-05 01:01:28.257687 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-05 01:01:28.257705 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-05 01:01:28.257722 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-05 01:01:28.257739 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-05 01:01:28.257756 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-05 01:01:28.257773 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-05 01:01:28.257790 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-05 01:01:28.257831 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-05 01:01:28.257859 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-05 01:01:28.257872 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-05 01:01:28.257885 | orchestrator |
2026-01-05 01:01:28.257897 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-01-05 01:01:28.257909 | orchestrator | Monday 05 January 2026 00:57:00 +0000 (0:00:04.816) 0:07:36.799 ********
2026-01-05 01:01:28.257922 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.257934 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.257947 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.257959 | orchestrator |
2026-01-05 01:01:28.257973 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-01-05 01:01:28.257985 | orchestrator | Monday 05 January 2026 00:57:01 +0000 (0:00:00.326) 0:07:37.126 ********
2026-01-05 01:01:28.257997 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 01:01:28.258010 | orchestrator |
2026-01-05 01:01:28.258097 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-01-05 01:01:28.258114 | orchestrator | Monday 05 January 2026 00:57:01 +0000 (0:00:00.559) 0:07:37.686 ********
2026-01-05 01:01:28.258131 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-05 01:01:28.258146 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-05 01:01:28.258161 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-01-05 01:01:28.258177 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-01-05 01:01:28.258193 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-05 01:01:28.258208 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-01-05 01:01:28.258224 | orchestrator |
2026-01-05 01:01:28.258240 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-01-05 01:01:28.258257 | orchestrator | Monday 05 January 2026 00:57:03 +0000 (0:00:01.397) 0:07:39.083 ********
2026-01-05 01:01:28.258284 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-05 01:01:28.258302 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-05 01:01:28.258314 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-05 01:01:28.258320 | orchestrator |
2026-01-05 01:01:28.258326 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-01-05 01:01:28.258331 | orchestrator | Monday 05 January 2026 00:57:05 +0000 (0:00:02.281) 0:07:41.365 ********
2026-01-05 01:01:28.258337 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-05 01:01:28.258343 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-05 01:01:28.258348 | orchestrator | changed: [testbed-node-3]
2026-01-05 01:01:28.258354 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-05 01:01:28.258360 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-01-05 01:01:28.258365 | orchestrator | changed: [testbed-node-4]
2026-01-05 01:01:28.258371 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-05 01:01:28.258376 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-01-05 01:01:28.258381 | orchestrator | changed: [testbed-node-5]
2026-01-05 01:01:28.258386 | orchestrator |
2026-01-05 01:01:28.258392 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-01-05 01:01:28.258398 | orchestrator | Monday 05 January 2026 00:57:06 +0000 (0:00:01.236) 0:07:42.601 ********
2026-01-05 01:01:28.258405 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-05 01:01:28.258411 | orchestrator |
2026-01-05 01:01:28.258416 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-01-05 01:01:28.258422 | orchestrator | Monday 05 January 2026 00:57:08 +0000 (0:00:02.340) 0:07:44.941 ********
2026-01-05 01:01:28.258428 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 01:01:28.258442 | orchestrator |
2026-01-05 01:01:28.258448 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-01-05 01:01:28.258454 | orchestrator | Monday 05 January 2026 00:57:09 +0000 (0:00:00.906) 0:07:45.847 ********
2026-01-05 01:01:28.258461 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f2726894-ebb3-5d48-8b2e-e077f444c4ac', 'data_vg': 'ceph-f2726894-ebb3-5d48-8b2e-e077f444c4ac'})
2026-01-05 01:01:28.258470 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-bd4e3544-7c7e-58ac-a4cc-590b648d75bf', 'data_vg': 'ceph-bd4e3544-7c7e-58ac-a4cc-590b648d75bf'})
2026-01-05 01:01:28.258487 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5dd43ce6-96bd-500c-b036-3c9652e3f870', 'data_vg': 'ceph-5dd43ce6-96bd-500c-b036-3c9652e3f870'})
2026-01-05 01:01:28.258492 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-edc09b40-6ec9-59c0-95b4-baacc31b5a92', 'data_vg': 'ceph-edc09b40-6ec9-59c0-95b4-baacc31b5a92'})
2026-01-05 01:01:28.258496 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-35e03706-0bf5-5720-bc24-6001f60a2be0', 'data_vg': 'ceph-35e03706-0bf5-5720-bc24-6001f60a2be0'})
2026-01-05 01:01:28.258500 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6f45f623-6f4a-59be-980f-23e900ac5d1d', 'data_vg': 'ceph-6f45f623-6f4a-59be-980f-23e900ac5d1d'})
2026-01-05 01:01:28.258504 | orchestrator |
2026-01-05 01:01:28.258508 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-01-05 01:01:28.258511 | orchestrator | Monday 05 January 2026 00:57:53 +0000 (0:00:43.530) 0:08:29.377 ********
2026-01-05 01:01:28.258515 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:01:28.258519 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:01:28.258523 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:01:28.258526 | orchestrator |
2026-01-05 01:01:28.258530 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-01-05 01:01:28.258534 | orchestrator | Monday 05 January 2026 00:57:53 +0000 (0:00:00.407) 0:08:29.785 ********
2026-01-05 01:01:28.258538 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 01:01:28.258541 | orchestrator |
2026-01-05 01:01:28.258545 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-01-05 01:01:28.258549 | orchestrator | Monday 05 January 2026 00:57:54 +0000 (0:00:00.920) 0:08:30.705 ********
2026-01-05 01:01:28.258553 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:01:28.258556 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:01:28.258560 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:01:28.258564 | orchestrator |
2026-01-05 01:01:28.258568 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-01-05 01:01:28.258571 | orchestrator | Monday 05 January 2026 00:57:55 +0000 (0:00:00.768) 0:08:31.474 ********
2026-01-05 01:01:28.258575 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:01:28.258579 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:01:28.258583 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:01:28.258586 | orchestrator |
2026-01-05 01:01:28.258590 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-01-05 01:01:28.258594 | orchestrator | Monday 05 January 2026 00:57:58 +0000 (0:00:02.814) 0:08:34.288 ********
2026-01-05
01:01:28.258598 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:01:28.258602 | orchestrator | 2026-01-05 01:01:28.258605 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-01-05 01:01:28.258609 | orchestrator | Monday 05 January 2026 00:57:59 +0000 (0:00:00.781) 0:08:35.070 ******** 2026-01-05 01:01:28.258613 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:01:28.258617 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:01:28.258621 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:01:28.258624 | orchestrator | 2026-01-05 01:01:28.258628 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-01-05 01:01:28.258641 | orchestrator | Monday 05 January 2026 00:58:00 +0000 (0:00:01.280) 0:08:36.351 ******** 2026-01-05 01:01:28.258645 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:01:28.258649 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:01:28.258653 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:01:28.258657 | orchestrator | 2026-01-05 01:01:28.258660 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-01-05 01:01:28.258664 | orchestrator | Monday 05 January 2026 00:58:01 +0000 (0:00:01.228) 0:08:37.580 ******** 2026-01-05 01:01:28.258668 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:01:28.258672 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:01:28.258675 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:01:28.258679 | orchestrator | 2026-01-05 01:01:28.258683 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-01-05 01:01:28.258686 | orchestrator | Monday 05 January 2026 00:58:03 +0000 (0:00:01.935) 0:08:39.516 ******** 2026-01-05 01:01:28.258690 | orchestrator | skipping: [testbed-node-3] 2026-01-05 
01:01:28.258694 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.258698 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.258701 | orchestrator | 2026-01-05 01:01:28.258705 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-01-05 01:01:28.258709 | orchestrator | Monday 05 January 2026 00:58:04 +0000 (0:00:00.619) 0:08:40.135 ******** 2026-01-05 01:01:28.258712 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.258716 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.258720 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.258724 | orchestrator | 2026-01-05 01:01:28.258727 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-01-05 01:01:28.258731 | orchestrator | Monday 05 January 2026 00:58:04 +0000 (0:00:00.319) 0:08:40.454 ******** 2026-01-05 01:01:28.258735 | orchestrator | ok: [testbed-node-3] => (item=5) 2026-01-05 01:01:28.258738 | orchestrator | ok: [testbed-node-4] => (item=4) 2026-01-05 01:01:28.258742 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-01-05 01:01:28.258746 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-05 01:01:28.258750 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-01-05 01:01:28.258753 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-01-05 01:01:28.258757 | orchestrator | 2026-01-05 01:01:28.258761 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-01-05 01:01:28.258765 | orchestrator | Monday 05 January 2026 00:58:05 +0000 (0:00:01.149) 0:08:41.603 ******** 2026-01-05 01:01:28.258768 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-01-05 01:01:28.258772 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-01-05 01:01:28.258776 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-01-05 01:01:28.258780 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-01-05 
01:01:28.258783 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-01-05 01:01:28.258790 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-01-05 01:01:28.258816 | orchestrator | 2026-01-05 01:01:28.258820 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-01-05 01:01:28.258824 | orchestrator | Monday 05 January 2026 00:58:07 +0000 (0:00:02.293) 0:08:43.896 ******** 2026-01-05 01:01:28.258828 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-01-05 01:01:28.258831 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-01-05 01:01:28.258835 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-01-05 01:01:28.258839 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-01-05 01:01:28.258843 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-01-05 01:01:28.258846 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-01-05 01:01:28.258850 | orchestrator | 2026-01-05 01:01:28.258854 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-01-05 01:01:28.258858 | orchestrator | Monday 05 January 2026 00:58:12 +0000 (0:00:04.819) 0:08:48.716 ******** 2026-01-05 01:01:28.258866 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.258870 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.258874 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-05 01:01:28.258878 | orchestrator | 2026-01-05 01:01:28.258881 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-01-05 01:01:28.258885 | orchestrator | Monday 05 January 2026 00:58:15 +0000 (0:00:02.361) 0:08:51.077 ******** 2026-01-05 01:01:28.258889 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.258892 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.258896 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for 
all osd to be up (60 retries left). 2026-01-05 01:01:28.258900 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-05 01:01:28.258904 | orchestrator | 2026-01-05 01:01:28.258908 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-01-05 01:01:28.258912 | orchestrator | Monday 05 January 2026 00:58:27 +0000 (0:00:12.591) 0:09:03.669 ******** 2026-01-05 01:01:28.258915 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.258919 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.258923 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.258927 | orchestrator | 2026-01-05 01:01:28.258930 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-05 01:01:28.258934 | orchestrator | Monday 05 January 2026 00:58:28 +0000 (0:00:01.220) 0:09:04.889 ******** 2026-01-05 01:01:28.258938 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.258942 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.258945 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.258949 | orchestrator | 2026-01-05 01:01:28.258953 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-01-05 01:01:28.258957 | orchestrator | Monday 05 January 2026 00:58:29 +0000 (0:00:00.485) 0:09:05.375 ******** 2026-01-05 01:01:28.258961 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:01:28.258964 | orchestrator | 2026-01-05 01:01:28.258968 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-01-05 01:01:28.258972 | orchestrator | Monday 05 January 2026 00:58:29 +0000 (0:00:00.556) 0:09:05.931 ******** 2026-01-05 01:01:28.258979 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 01:01:28.258983 | orchestrator 
| skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 01:01:28.258987 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 01:01:28.258991 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.258995 | orchestrator | 2026-01-05 01:01:28.258998 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-01-05 01:01:28.259002 | orchestrator | Monday 05 January 2026 00:58:31 +0000 (0:00:01.130) 0:09:07.061 ******** 2026-01-05 01:01:28.259006 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.259010 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.259013 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.259017 | orchestrator | 2026-01-05 01:01:28.259021 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-01-05 01:01:28.259025 | orchestrator | Monday 05 January 2026 00:58:31 +0000 (0:00:00.357) 0:09:07.419 ******** 2026-01-05 01:01:28.259028 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.259032 | orchestrator | 2026-01-05 01:01:28.259036 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-01-05 01:01:28.259047 | orchestrator | Monday 05 January 2026 00:58:31 +0000 (0:00:00.271) 0:09:07.691 ******** 2026-01-05 01:01:28.259051 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.259055 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.259059 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.259063 | orchestrator | 2026-01-05 01:01:28.259066 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-01-05 01:01:28.259073 | orchestrator | Monday 05 January 2026 00:58:32 +0000 (0:00:00.339) 0:09:08.031 ******** 2026-01-05 01:01:28.259077 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.259081 | orchestrator | 2026-01-05 
01:01:28.259085 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-01-05 01:01:28.259089 | orchestrator | Monday 05 January 2026 00:58:32 +0000 (0:00:00.255) 0:09:08.286 ******** 2026-01-05 01:01:28.259092 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.259096 | orchestrator | 2026-01-05 01:01:28.259100 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-01-05 01:01:28.259104 | orchestrator | Monday 05 January 2026 00:58:32 +0000 (0:00:00.259) 0:09:08.546 ******** 2026-01-05 01:01:28.259107 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.259111 | orchestrator | 2026-01-05 01:01:28.259115 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-01-05 01:01:28.259119 | orchestrator | Monday 05 January 2026 00:58:32 +0000 (0:00:00.139) 0:09:08.685 ******** 2026-01-05 01:01:28.259122 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.259126 | orchestrator | 2026-01-05 01:01:28.259133 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-01-05 01:01:28.259137 | orchestrator | Monday 05 January 2026 00:58:32 +0000 (0:00:00.268) 0:09:08.953 ******** 2026-01-05 01:01:28.259141 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.259145 | orchestrator | 2026-01-05 01:01:28.259148 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-01-05 01:01:28.259152 | orchestrator | Monday 05 January 2026 00:58:33 +0000 (0:00:00.998) 0:09:09.952 ******** 2026-01-05 01:01:28.259156 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 01:01:28.259160 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 01:01:28.259163 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 01:01:28.259167 | orchestrator | 
skipping: [testbed-node-3] 2026-01-05 01:01:28.259171 | orchestrator | 2026-01-05 01:01:28.259175 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-01-05 01:01:28.259178 | orchestrator | Monday 05 January 2026 00:58:34 +0000 (0:00:00.454) 0:09:10.406 ******** 2026-01-05 01:01:28.259182 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.259186 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.259190 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.259193 | orchestrator | 2026-01-05 01:01:28.259197 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-01-05 01:01:28.259201 | orchestrator | Monday 05 January 2026 00:58:34 +0000 (0:00:00.371) 0:09:10.778 ******** 2026-01-05 01:01:28.259205 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.259208 | orchestrator | 2026-01-05 01:01:28.259212 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-01-05 01:01:28.259216 | orchestrator | Monday 05 January 2026 00:58:35 +0000 (0:00:00.273) 0:09:11.052 ******** 2026-01-05 01:01:28.259220 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.259223 | orchestrator | 2026-01-05 01:01:28.259227 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-01-05 01:01:28.259231 | orchestrator | 2026-01-05 01:01:28.259235 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-05 01:01:28.259238 | orchestrator | Monday 05 January 2026 00:58:36 +0000 (0:00:00.992) 0:09:12.044 ******** 2026-01-05 01:01:28.259242 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:01:28.259248 | orchestrator | 2026-01-05 01:01:28.259252 | orchestrator | 
TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-05 01:01:28.259256 | orchestrator | Monday 05 January 2026 00:58:37 +0000 (0:00:01.218) 0:09:13.263 ******** 2026-01-05 01:01:28.259260 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4, testbed-node-3, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:01:28.259271 | orchestrator | 2026-01-05 01:01:28.259275 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-05 01:01:28.259279 | orchestrator | Monday 05 January 2026 00:58:38 +0000 (0:00:01.063) 0:09:14.327 ******** 2026-01-05 01:01:28.259283 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.259287 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.259290 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.259297 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.259302 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:28.259308 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:28.259314 | orchestrator | 2026-01-05 01:01:28.259319 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-05 01:01:28.259329 | orchestrator | Monday 05 January 2026 00:58:39 +0000 (0:00:01.464) 0:09:15.791 ******** 2026-01-05 01:01:28.259339 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.259344 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.259350 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.259355 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.259361 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.259366 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.259372 | orchestrator | 2026-01-05 01:01:28.259378 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-05 01:01:28.259384 | 
orchestrator | Monday 05 January 2026 00:58:40 +0000 (0:00:00.717) 0:09:16.510 ******** 2026-01-05 01:01:28.259390 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.259396 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.259402 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.259408 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.259413 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.259419 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.259425 | orchestrator | 2026-01-05 01:01:28.259429 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-05 01:01:28.259433 | orchestrator | Monday 05 January 2026 00:58:41 +0000 (0:00:01.045) 0:09:17.555 ******** 2026-01-05 01:01:28.259436 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.259440 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.259444 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.259447 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.259451 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.259455 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.259458 | orchestrator | 2026-01-05 01:01:28.259462 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-05 01:01:28.259466 | orchestrator | Monday 05 January 2026 00:58:42 +0000 (0:00:00.741) 0:09:18.296 ******** 2026-01-05 01:01:28.259470 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.259473 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.259477 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.259481 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.259484 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:28.259488 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:28.259492 | orchestrator | 2026-01-05 01:01:28.259496 | orchestrator | TASK [ceph-handler : Check 
for a rbd mirror container] ************************* 2026-01-05 01:01:28.259499 | orchestrator | Monday 05 January 2026 00:58:43 +0000 (0:00:01.308) 0:09:19.605 ******** 2026-01-05 01:01:28.259503 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.259507 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.259515 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.259519 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.259522 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.259526 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.259530 | orchestrator | 2026-01-05 01:01:28.259533 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-05 01:01:28.259542 | orchestrator | Monday 05 January 2026 00:58:44 +0000 (0:00:00.634) 0:09:20.239 ******** 2026-01-05 01:01:28.259546 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.259550 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.259553 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.259557 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.259561 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.259564 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.259568 | orchestrator | 2026-01-05 01:01:28.259572 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-05 01:01:28.259576 | orchestrator | Monday 05 January 2026 00:58:45 +0000 (0:00:00.975) 0:09:21.214 ******** 2026-01-05 01:01:28.259579 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.259583 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.259587 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.259590 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.259594 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:28.259598 | orchestrator | ok: [testbed-node-2] 2026-01-05 
01:01:28.259601 | orchestrator | 2026-01-05 01:01:28.259605 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-05 01:01:28.259609 | orchestrator | Monday 05 January 2026 00:58:46 +0000 (0:00:01.208) 0:09:22.423 ******** 2026-01-05 01:01:28.259613 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.259616 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.259620 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.259624 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.259627 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:28.259631 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:28.259635 | orchestrator | 2026-01-05 01:01:28.259639 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-05 01:01:28.259642 | orchestrator | Monday 05 January 2026 00:58:47 +0000 (0:00:01.563) 0:09:23.986 ******** 2026-01-05 01:01:28.259646 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.259650 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.259653 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.259657 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.259661 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.259665 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.259668 | orchestrator | 2026-01-05 01:01:28.259672 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-05 01:01:28.259676 | orchestrator | Monday 05 January 2026 00:58:48 +0000 (0:00:00.685) 0:09:24.672 ******** 2026-01-05 01:01:28.259680 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.259683 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.259687 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.259691 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.259694 | orchestrator | ok: 
[testbed-node-1] 2026-01-05 01:01:28.259698 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:28.259702 | orchestrator | 2026-01-05 01:01:28.259706 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-05 01:01:28.259709 | orchestrator | Monday 05 January 2026 00:58:49 +0000 (0:00:01.132) 0:09:25.804 ******** 2026-01-05 01:01:28.259713 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.259717 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.259720 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.259727 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.259731 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.259735 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.259739 | orchestrator | 2026-01-05 01:01:28.259742 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-05 01:01:28.259746 | orchestrator | Monday 05 January 2026 00:58:50 +0000 (0:00:00.652) 0:09:26.457 ******** 2026-01-05 01:01:28.259750 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.259754 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.259757 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.259765 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.259769 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.259772 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.259776 | orchestrator | 2026-01-05 01:01:28.259780 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-05 01:01:28.259783 | orchestrator | Monday 05 January 2026 00:58:51 +0000 (0:00:00.912) 0:09:27.370 ******** 2026-01-05 01:01:28.259787 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.259814 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.259819 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.259823 | orchestrator | 
skipping: [testbed-node-0] 2026-01-05 01:01:28.259826 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.259830 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.259834 | orchestrator | 2026-01-05 01:01:28.259838 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-05 01:01:28.259841 | orchestrator | Monday 05 January 2026 00:58:51 +0000 (0:00:00.615) 0:09:27.985 ******** 2026-01-05 01:01:28.259845 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.259849 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.259852 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.259856 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.259860 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.259864 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.259867 | orchestrator | 2026-01-05 01:01:28.259871 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-05 01:01:28.259875 | orchestrator | Monday 05 January 2026 00:58:52 +0000 (0:00:01.008) 0:09:28.993 ******** 2026-01-05 01:01:28.259879 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.259882 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.259886 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.259890 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:28.259894 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:28.259897 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:28.259901 | orchestrator | 2026-01-05 01:01:28.259905 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-05 01:01:28.259909 | orchestrator | Monday 05 January 2026 00:58:53 +0000 (0:00:00.666) 0:09:29.660 ******** 2026-01-05 01:01:28.259913 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.259920 | orchestrator | 
skipping: [testbed-node-4] 2026-01-05 01:01:28.259924 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.259927 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.259931 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:28.259935 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:28.259939 | orchestrator | 2026-01-05 01:01:28.259942 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-05 01:01:28.259946 | orchestrator | Monday 05 January 2026 00:58:54 +0000 (0:00:00.888) 0:09:30.549 ******** 2026-01-05 01:01:28.259950 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.259954 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.259958 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.259961 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.259965 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:28.259969 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:28.259973 | orchestrator | 2026-01-05 01:01:28.259976 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-05 01:01:28.259980 | orchestrator | Monday 05 January 2026 00:58:55 +0000 (0:00:00.692) 0:09:31.241 ******** 2026-01-05 01:01:28.259984 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.259988 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.259991 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.259995 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.259999 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:28.260003 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:28.260006 | orchestrator | 2026-01-05 01:01:28.260010 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-01-05 01:01:28.260017 | orchestrator | Monday 05 January 2026 00:58:56 +0000 (0:00:01.347) 0:09:32.589 ******** 2026-01-05 01:01:28.260021 | orchestrator | changed: [testbed-node-3 
-> testbed-node-0(192.168.16.10)] 2026-01-05 01:01:28.260025 | orchestrator | 2026-01-05 01:01:28.260028 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-01-05 01:01:28.260032 | orchestrator | Monday 05 January 2026 00:59:00 +0000 (0:00:04.208) 0:09:36.797 ******** 2026-01-05 01:01:28.260036 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-05 01:01:28.260040 | orchestrator | 2026-01-05 01:01:28.260044 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-01-05 01:01:28.260047 | orchestrator | Monday 05 January 2026 00:59:03 +0000 (0:00:02.499) 0:09:39.297 ******** 2026-01-05 01:01:28.260051 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:01:28.260055 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:01:28.260059 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:01:28.260062 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.260066 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:01:28.260070 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:01:28.260074 | orchestrator | 2026-01-05 01:01:28.260078 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-01-05 01:01:28.260081 | orchestrator | Monday 05 January 2026 00:59:05 +0000 (0:00:01.861) 0:09:41.158 ******** 2026-01-05 01:01:28.260085 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:01:28.260089 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:01:28.260093 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:01:28.260097 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:01:28.260100 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:01:28.260104 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:01:28.260108 | orchestrator | 2026-01-05 01:01:28.260112 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-01-05 01:01:28.260119 | orchestrator | Monday 05 January 2026 00:59:06 +0000 (0:00:01.134) 0:09:42.293 ******** 2026-01-05 01:01:28.260123 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:01:28.260128 | orchestrator | 2026-01-05 01:01:28.260132 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-01-05 01:01:28.260136 | orchestrator | Monday 05 January 2026 00:59:07 +0000 (0:00:01.407) 0:09:43.700 ******** 2026-01-05 01:01:28.260140 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:01:28.260143 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:01:28.260147 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:01:28.260151 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:01:28.260155 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:01:28.260158 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:01:28.260162 | orchestrator | 2026-01-05 01:01:28.260166 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-01-05 01:01:28.260170 | orchestrator | Monday 05 January 2026 00:59:09 +0000 (0:00:01.910) 0:09:45.610 ******** 2026-01-05 01:01:28.260173 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:01:28.260177 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:01:28.260181 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:01:28.260185 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:01:28.260188 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:01:28.260192 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:01:28.260196 | orchestrator | 2026-01-05 01:01:28.260200 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-01-05 01:01:28.260204 | orchestrator | Monday 05 January 2026 00:59:13 +0000 (0:00:03.775) 
0:09:49.385 ******** 2026-01-05 01:01:28.260208 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:01:28.260212 | orchestrator | 2026-01-05 01:01:28.260219 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-01-05 01:01:28.260223 | orchestrator | Monday 05 January 2026 00:59:14 +0000 (0:00:01.401) 0:09:50.787 ******** 2026-01-05 01:01:28.260226 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.260230 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.260234 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.260238 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.260242 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:28.260245 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:28.260249 | orchestrator | 2026-01-05 01:01:28.260253 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-01-05 01:01:28.260257 | orchestrator | Monday 05 January 2026 00:59:15 +0000 (0:00:00.950) 0:09:51.738 ******** 2026-01-05 01:01:28.260261 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:01:28.260267 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:01:28.260271 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:01:28.260275 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:01:28.260279 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:01:28.260282 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:01:28.260286 | orchestrator | 2026-01-05 01:01:28.260290 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-01-05 01:01:28.260294 | orchestrator | Monday 05 January 2026 00:59:17 +0000 (0:00:02.278) 0:09:54.016 ******** 2026-01-05 01:01:28.260298 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.260301 | 
orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.260305 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.260309 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:28.260313 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:28.260317 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:28.260320 | orchestrator | 2026-01-05 01:01:28.260324 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-01-05 01:01:28.260328 | orchestrator | 2026-01-05 01:01:28.260332 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-05 01:01:28.260336 | orchestrator | Monday 05 January 2026 00:59:19 +0000 (0:00:01.280) 0:09:55.297 ******** 2026-01-05 01:01:28.260340 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:01:28.260344 | orchestrator | 2026-01-05 01:01:28.260347 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-05 01:01:28.260351 | orchestrator | Monday 05 January 2026 00:59:19 +0000 (0:00:00.471) 0:09:55.769 ******** 2026-01-05 01:01:28.260355 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:01:28.260359 | orchestrator | 2026-01-05 01:01:28.260363 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-05 01:01:28.260366 | orchestrator | Monday 05 January 2026 00:59:20 +0000 (0:00:00.671) 0:09:56.440 ******** 2026-01-05 01:01:28.260370 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.260374 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.260378 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.260382 | orchestrator | 2026-01-05 01:01:28.260385 | orchestrator | TASK [ceph-handler : Check for an osd 
container] ******************************* 2026-01-05 01:01:28.260389 | orchestrator | Monday 05 January 2026 00:59:20 +0000 (0:00:00.312) 0:09:56.753 ******** 2026-01-05 01:01:28.260393 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.260397 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.260401 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.260404 | orchestrator | 2026-01-05 01:01:28.260408 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-05 01:01:28.260412 | orchestrator | Monday 05 January 2026 00:59:21 +0000 (0:00:00.684) 0:09:57.438 ******** 2026-01-05 01:01:28.260416 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.260420 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.260423 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.260430 | orchestrator | 2026-01-05 01:01:28.260434 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-05 01:01:28.260438 | orchestrator | Monday 05 January 2026 00:59:22 +0000 (0:00:01.033) 0:09:58.471 ******** 2026-01-05 01:01:28.260442 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.260445 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.260452 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.260456 | orchestrator | 2026-01-05 01:01:28.260460 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-05 01:01:28.260464 | orchestrator | Monday 05 January 2026 00:59:23 +0000 (0:00:00.686) 0:09:59.158 ******** 2026-01-05 01:01:28.260467 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.260471 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.260475 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.260479 | orchestrator | 2026-01-05 01:01:28.260483 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-05 
01:01:28.260486 | orchestrator | Monday 05 January 2026 00:59:23 +0000 (0:00:00.315) 0:09:59.474 ******** 2026-01-05 01:01:28.260490 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.260494 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.260498 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.260501 | orchestrator | 2026-01-05 01:01:28.260505 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-05 01:01:28.260509 | orchestrator | Monday 05 January 2026 00:59:23 +0000 (0:00:00.306) 0:09:59.780 ******** 2026-01-05 01:01:28.260513 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.260517 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.260520 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.260524 | orchestrator | 2026-01-05 01:01:28.260528 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-05 01:01:28.260532 | orchestrator | Monday 05 January 2026 00:59:24 +0000 (0:00:00.645) 0:10:00.426 ******** 2026-01-05 01:01:28.260536 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.260539 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.260543 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.260547 | orchestrator | 2026-01-05 01:01:28.260551 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-05 01:01:28.260555 | orchestrator | Monday 05 January 2026 00:59:25 +0000 (0:00:00.739) 0:10:01.165 ******** 2026-01-05 01:01:28.260558 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.260562 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.260566 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.260570 | orchestrator | 2026-01-05 01:01:28.260574 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-05 01:01:28.260578 | orchestrator | Monday 
05 January 2026 00:59:25 +0000 (0:00:00.844) 0:10:02.009 ******** 2026-01-05 01:01:28.260581 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.260585 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.260589 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.260593 | orchestrator | 2026-01-05 01:01:28.260597 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-05 01:01:28.260600 | orchestrator | Monday 05 January 2026 00:59:26 +0000 (0:00:00.357) 0:10:02.367 ******** 2026-01-05 01:01:28.260604 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.260611 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.260615 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.260619 | orchestrator | 2026-01-05 01:01:28.260622 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-05 01:01:28.260626 | orchestrator | Monday 05 January 2026 00:59:27 +0000 (0:00:00.930) 0:10:03.297 ******** 2026-01-05 01:01:28.260630 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.260634 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.260637 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.260641 | orchestrator | 2026-01-05 01:01:28.260645 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-05 01:01:28.260652 | orchestrator | Monday 05 January 2026 00:59:27 +0000 (0:00:00.417) 0:10:03.715 ******** 2026-01-05 01:01:28.260656 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.260660 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.260663 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.260667 | orchestrator | 2026-01-05 01:01:28.260671 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-05 01:01:28.260675 | orchestrator | Monday 05 January 2026 00:59:28 +0000 
(0:00:00.356) 0:10:04.071 ******** 2026-01-05 01:01:28.260679 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.260682 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.260686 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.260690 | orchestrator | 2026-01-05 01:01:28.260694 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-05 01:01:28.260698 | orchestrator | Monday 05 January 2026 00:59:28 +0000 (0:00:00.355) 0:10:04.427 ******** 2026-01-05 01:01:28.260701 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.260705 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.260709 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.260713 | orchestrator | 2026-01-05 01:01:28.260716 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-05 01:01:28.260720 | orchestrator | Monday 05 January 2026 00:59:29 +0000 (0:00:00.674) 0:10:05.101 ******** 2026-01-05 01:01:28.260724 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.260728 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.260732 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.260735 | orchestrator | 2026-01-05 01:01:28.260739 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-05 01:01:28.260743 | orchestrator | Monday 05 January 2026 00:59:29 +0000 (0:00:00.334) 0:10:05.436 ******** 2026-01-05 01:01:28.260747 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.260750 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.260754 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.260758 | orchestrator | 2026-01-05 01:01:28.260762 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-05 01:01:28.260766 | orchestrator | Monday 05 January 2026 00:59:29 +0000 (0:00:00.340) 
0:10:05.777 ******** 2026-01-05 01:01:28.260769 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.260773 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.260777 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.260781 | orchestrator | 2026-01-05 01:01:28.260785 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-05 01:01:28.260788 | orchestrator | Monday 05 January 2026 00:59:30 +0000 (0:00:00.335) 0:10:06.113 ******** 2026-01-05 01:01:28.260810 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.260814 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.260818 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.260822 | orchestrator | 2026-01-05 01:01:28.260829 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-01-05 01:01:28.260833 | orchestrator | Monday 05 January 2026 00:59:31 +0000 (0:00:00.913) 0:10:07.026 ******** 2026-01-05 01:01:28.260836 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.260840 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.260844 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-01-05 01:01:28.260848 | orchestrator | 2026-01-05 01:01:28.260851 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-01-05 01:01:28.260855 | orchestrator | Monday 05 January 2026 00:59:31 +0000 (0:00:00.446) 0:10:07.473 ******** 2026-01-05 01:01:28.260859 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-05 01:01:28.260863 | orchestrator | 2026-01-05 01:01:28.260866 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-01-05 01:01:28.260870 | orchestrator | Monday 05 January 2026 00:59:33 +0000 (0:00:02.192) 0:10:09.666 ******** 2026-01-05 01:01:28.260880 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-01-05 01:01:28.260886 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.260889 | orchestrator | 2026-01-05 01:01:28.260893 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-01-05 01:01:28.260897 | orchestrator | Monday 05 January 2026 00:59:33 +0000 (0:00:00.218) 0:10:09.885 ******** 2026-01-05 01:01:28.260903 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-05 01:01:28.260913 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-05 01:01:28.260917 | orchestrator | 2026-01-05 01:01:28.260920 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-01-05 01:01:28.260924 | orchestrator | Monday 05 January 2026 00:59:42 +0000 (0:00:08.634) 0:10:18.519 ******** 2026-01-05 01:01:28.260931 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-05 01:01:28.260935 | orchestrator | 2026-01-05 01:01:28.260939 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-01-05 01:01:28.260942 | orchestrator | Monday 05 January 2026 00:59:47 +0000 (0:00:04.669) 0:10:23.188 ******** 2026-01-05 01:01:28.260946 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-01-05 01:01:28.260950 | orchestrator | 2026-01-05 01:01:28.260954 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-01-05 01:01:28.260958 | orchestrator | Monday 05 January 2026 00:59:48 +0000 (0:00:00.858) 0:10:24.047 ******** 2026-01-05 01:01:28.260961 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-01-05 01:01:28.260965 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-01-05 01:01:28.260969 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-01-05 01:01:28.260973 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-01-05 01:01:28.260977 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-01-05 01:01:28.260980 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-01-05 01:01:28.260984 | orchestrator | 2026-01-05 01:01:28.260988 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-01-05 01:01:28.260992 | orchestrator | Monday 05 January 2026 00:59:49 +0000 (0:00:01.387) 0:10:25.434 ******** 2026-01-05 01:01:28.260996 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:01:28.260999 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-05 01:01:28.261003 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-05 01:01:28.261007 | orchestrator | 2026-01-05 01:01:28.261011 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-01-05 01:01:28.261015 | orchestrator | Monday 05 January 2026 00:59:52 +0000 (0:00:03.239) 0:10:28.674 ******** 2026-01-05 01:01:28.261018 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-05 01:01:28.261025 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2026-01-05 01:01:28.261032 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:01:28.261038 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-05 01:01:28.261044 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-05 01:01:28.261050 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:01:28.261061 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-05 01:01:28.261066 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-05 01:01:28.261072 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:01:28.261078 | orchestrator | 2026-01-05 01:01:28.261085 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-01-05 01:01:28.261091 | orchestrator | Monday 05 January 2026 00:59:54 +0000 (0:00:02.161) 0:10:30.835 ******** 2026-01-05 01:01:28.261097 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:01:28.261103 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:01:28.261108 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:01:28.261114 | orchestrator | 2026-01-05 01:01:28.261124 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-01-05 01:01:28.261131 | orchestrator | Monday 05 January 2026 00:59:57 +0000 (0:00:02.534) 0:10:33.370 ******** 2026-01-05 01:01:28.261136 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.261143 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.261149 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.261155 | orchestrator | 2026-01-05 01:01:28.261162 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-01-05 01:01:28.261168 | orchestrator | Monday 05 January 2026 00:59:57 +0000 (0:00:00.530) 0:10:33.901 ******** 2026-01-05 01:01:28.261175 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-01-05 01:01:28.261181 | orchestrator | 2026-01-05 01:01:28.261187 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-01-05 01:01:28.261194 | orchestrator | Monday 05 January 2026 00:59:59 +0000 (0:00:01.442) 0:10:35.343 ******** 2026-01-05 01:01:28.261200 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:01:28.261206 | orchestrator | 2026-01-05 01:01:28.261212 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-01-05 01:01:28.261219 | orchestrator | Monday 05 January 2026 01:00:00 +0000 (0:00:00.983) 0:10:36.327 ******** 2026-01-05 01:01:28.261223 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:01:28.261227 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:01:28.261230 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:01:28.261234 | orchestrator | 2026-01-05 01:01:28.261238 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-01-05 01:01:28.261241 | orchestrator | Monday 05 January 2026 01:00:01 +0000 (0:00:01.295) 0:10:37.622 ******** 2026-01-05 01:01:28.261245 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:01:28.261249 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:01:28.261252 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:01:28.261256 | orchestrator | 2026-01-05 01:01:28.261260 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-01-05 01:01:28.261263 | orchestrator | Monday 05 January 2026 01:00:03 +0000 (0:00:01.605) 0:10:39.227 ******** 2026-01-05 01:01:28.261267 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:01:28.261271 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:01:28.261275 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:01:28.261278 | orchestrator | 2026-01-05 
01:01:28.261282 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-01-05 01:01:28.261286 | orchestrator | Monday 05 January 2026 01:00:05 +0000 (0:00:02.318) 0:10:41.546 ******** 2026-01-05 01:01:28.261290 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:01:28.261298 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:01:28.261302 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:01:28.261305 | orchestrator | 2026-01-05 01:01:28.261309 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-01-05 01:01:28.261313 | orchestrator | Monday 05 January 2026 01:00:07 +0000 (0:00:02.360) 0:10:43.907 ******** 2026-01-05 01:01:28.261316 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.261320 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.261331 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.261335 | orchestrator | 2026-01-05 01:01:28.261339 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-05 01:01:28.261343 | orchestrator | Monday 05 January 2026 01:00:09 +0000 (0:00:01.562) 0:10:45.469 ******** 2026-01-05 01:01:28.261346 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:01:28.261350 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:01:28.261354 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:01:28.261358 | orchestrator | 2026-01-05 01:01:28.261361 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-01-05 01:01:28.261365 | orchestrator | Monday 05 January 2026 01:00:10 +0000 (0:00:00.681) 0:10:46.150 ******** 2026-01-05 01:01:28.261369 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:01:28.261373 | orchestrator | 2026-01-05 01:01:28.261376 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-01-05 01:01:28.261380 | orchestrator | Monday 05 January 2026 01:00:10 +0000 (0:00:00.842) 0:10:46.993 ******** 2026-01-05 01:01:28.261384 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.261388 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.261391 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.261395 | orchestrator | 2026-01-05 01:01:28.261399 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-01-05 01:01:28.261403 | orchestrator | Monday 05 January 2026 01:00:11 +0000 (0:00:00.389) 0:10:47.383 ******** 2026-01-05 01:01:28.261406 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:01:28.261410 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:01:28.261414 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:01:28.261418 | orchestrator | 2026-01-05 01:01:28.261421 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-01-05 01:01:28.261425 | orchestrator | Monday 05 January 2026 01:00:12 +0000 (0:00:01.149) 0:10:48.532 ******** 2026-01-05 01:01:28.261429 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 01:01:28.261433 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 01:01:28.261436 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 01:01:28.261440 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.261444 | orchestrator | 2026-01-05 01:01:28.261447 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-01-05 01:01:28.261451 | orchestrator | Monday 05 January 2026 01:00:13 +0000 (0:00:01.001) 0:10:49.533 ******** 2026-01-05 01:01:28.261455 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.261459 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.261462 | orchestrator | ok: [testbed-node-5] 2026-01-05 
01:01:28.261466 | orchestrator | 2026-01-05 01:01:28.261470 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-01-05 01:01:28.261474 | orchestrator | 2026-01-05 01:01:28.261478 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-05 01:01:28.261484 | orchestrator | Monday 05 January 2026 01:00:14 +0000 (0:00:00.903) 0:10:50.436 ******** 2026-01-05 01:01:28.261488 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:01:28.261492 | orchestrator | 2026-01-05 01:01:28.261496 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-05 01:01:28.261500 | orchestrator | Monday 05 January 2026 01:00:14 +0000 (0:00:00.512) 0:10:50.949 ******** 2026-01-05 01:01:28.261503 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:01:28.261507 | orchestrator | 2026-01-05 01:01:28.261511 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-05 01:01:28.261515 | orchestrator | Monday 05 January 2026 01:00:15 +0000 (0:00:00.827) 0:10:51.776 ******** 2026-01-05 01:01:28.261518 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.261525 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.261529 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.261533 | orchestrator | 2026-01-05 01:01:28.261536 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-05 01:01:28.261540 | orchestrator | Monday 05 January 2026 01:00:16 +0000 (0:00:00.330) 0:10:52.107 ******** 2026-01-05 01:01:28.261544 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.261548 | orchestrator | ok: [testbed-node-4] 2026-01-05 
01:01:28.261551 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.261555 | orchestrator | 2026-01-05 01:01:28.261559 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-05 01:01:28.261562 | orchestrator | Monday 05 January 2026 01:00:16 +0000 (0:00:00.751) 0:10:52.858 ******** 2026-01-05 01:01:28.261566 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.261570 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.261574 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.261577 | orchestrator | 2026-01-05 01:01:28.261581 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-05 01:01:28.261585 | orchestrator | Monday 05 January 2026 01:00:17 +0000 (0:00:00.995) 0:10:53.854 ******** 2026-01-05 01:01:28.261588 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.261592 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.261596 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.261600 | orchestrator | 2026-01-05 01:01:28.261603 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-05 01:01:28.261607 | orchestrator | Monday 05 January 2026 01:00:18 +0000 (0:00:00.800) 0:10:54.654 ******** 2026-01-05 01:01:28.261611 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.261615 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.261618 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.261622 | orchestrator | 2026-01-05 01:01:28.261628 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-05 01:01:28.261632 | orchestrator | Monday 05 January 2026 01:00:18 +0000 (0:00:00.323) 0:10:54.978 ******** 2026-01-05 01:01:28.261636 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.261639 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.261643 | orchestrator | skipping: 
[testbed-node-5] 2026-01-05 01:01:28.261647 | orchestrator | 2026-01-05 01:01:28.261651 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-05 01:01:28.261654 | orchestrator | Monday 05 January 2026 01:00:19 +0000 (0:00:00.377) 0:10:55.355 ******** 2026-01-05 01:01:28.261658 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.261662 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.261666 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.261669 | orchestrator | 2026-01-05 01:01:28.261673 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-05 01:01:28.261677 | orchestrator | Monday 05 January 2026 01:00:19 +0000 (0:00:00.632) 0:10:55.988 ******** 2026-01-05 01:01:28.261680 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.261684 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.261688 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.261692 | orchestrator | 2026-01-05 01:01:28.261695 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-05 01:01:28.261699 | orchestrator | Monday 05 January 2026 01:00:20 +0000 (0:00:00.784) 0:10:56.773 ******** 2026-01-05 01:01:28.261703 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.261707 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.261710 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.261714 | orchestrator | 2026-01-05 01:01:28.261718 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-05 01:01:28.261721 | orchestrator | Monday 05 January 2026 01:00:21 +0000 (0:00:00.749) 0:10:57.522 ******** 2026-01-05 01:01:28.261725 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.261729 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.261733 | orchestrator | skipping: [testbed-node-5] 2026-01-05 
01:01:28.261740 | orchestrator | 2026-01-05 01:01:28.261744 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-05 01:01:28.261748 | orchestrator | Monday 05 January 2026 01:00:21 +0000 (0:00:00.330) 0:10:57.852 ******** 2026-01-05 01:01:28.261751 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.261755 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.261759 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.261763 | orchestrator | 2026-01-05 01:01:28.261766 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-05 01:01:28.261770 | orchestrator | Monday 05 January 2026 01:00:22 +0000 (0:00:00.591) 0:10:58.444 ******** 2026-01-05 01:01:28.261774 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.261777 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.261781 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.261785 | orchestrator | 2026-01-05 01:01:28.261789 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-05 01:01:28.261811 | orchestrator | Monday 05 January 2026 01:00:22 +0000 (0:00:00.361) 0:10:58.806 ******** 2026-01-05 01:01:28.261815 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.261818 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.261822 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.261826 | orchestrator | 2026-01-05 01:01:28.261830 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-05 01:01:28.261836 | orchestrator | Monday 05 January 2026 01:00:23 +0000 (0:00:00.390) 0:10:59.196 ******** 2026-01-05 01:01:28.261840 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.261844 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.261847 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.261851 | orchestrator | 2026-01-05 
01:01:28.261855 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-05 01:01:28.261859 | orchestrator | Monday 05 January 2026 01:00:23 +0000 (0:00:00.347) 0:10:59.544 ******** 2026-01-05 01:01:28.261862 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.261866 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.261870 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.261873 | orchestrator | 2026-01-05 01:01:28.261877 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-05 01:01:28.261881 | orchestrator | Monday 05 January 2026 01:00:24 +0000 (0:00:00.635) 0:11:00.179 ******** 2026-01-05 01:01:28.261885 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.261888 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.261892 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.261896 | orchestrator | 2026-01-05 01:01:28.261900 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-05 01:01:28.261903 | orchestrator | Monday 05 January 2026 01:00:24 +0000 (0:00:00.345) 0:11:00.525 ******** 2026-01-05 01:01:28.261907 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.261911 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.261914 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.261918 | orchestrator | 2026-01-05 01:01:28.261922 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-05 01:01:28.261925 | orchestrator | Monday 05 January 2026 01:00:24 +0000 (0:00:00.344) 0:11:00.869 ******** 2026-01-05 01:01:28.261929 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.261933 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.261937 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.261940 | orchestrator | 2026-01-05 01:01:28.261944 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-05 01:01:28.261948 | orchestrator | Monday 05 January 2026 01:00:25 +0000 (0:00:00.367) 0:11:01.237 ******** 2026-01-05 01:01:28.261951 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.261955 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.261959 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.261962 | orchestrator | 2026-01-05 01:01:28.261966 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-01-05 01:01:28.261974 | orchestrator | Monday 05 January 2026 01:00:26 +0000 (0:00:00.880) 0:11:02.117 ******** 2026-01-05 01:01:28.261977 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:01:28.261981 | orchestrator | 2026-01-05 01:01:28.261985 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-05 01:01:28.261992 | orchestrator | Monday 05 January 2026 01:00:26 +0000 (0:00:00.573) 0:11:02.691 ******** 2026-01-05 01:01:28.261996 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:01:28.262000 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-05 01:01:28.262004 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-05 01:01:28.262007 | orchestrator | 2026-01-05 01:01:28.262040 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-05 01:01:28.262045 | orchestrator | Monday 05 January 2026 01:00:29 +0000 (0:00:02.532) 0:11:05.224 ******** 2026-01-05 01:01:28.262048 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-05 01:01:28.262052 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-05 01:01:28.262056 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:01:28.262060 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-01-05 01:01:28.262063 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-05 01:01:28.262067 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:01:28.262071 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-05 01:01:28.262075 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-05 01:01:28.262078 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:01:28.262082 | orchestrator | 2026-01-05 01:01:28.262086 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-01-05 01:01:28.262090 | orchestrator | Monday 05 January 2026 01:00:30 +0000 (0:00:01.443) 0:11:06.667 ******** 2026-01-05 01:01:28.262093 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.262097 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.262101 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.262104 | orchestrator | 2026-01-05 01:01:28.262108 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-01-05 01:01:28.262112 | orchestrator | Monday 05 January 2026 01:00:30 +0000 (0:00:00.319) 0:11:06.986 ******** 2026-01-05 01:01:28.262116 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:01:28.262120 | orchestrator | 2026-01-05 01:01:28.262123 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-01-05 01:01:28.262127 | orchestrator | Monday 05 January 2026 01:00:31 +0000 (0:00:00.548) 0:11:07.535 ******** 2026-01-05 01:01:28.262131 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-05 01:01:28.262135 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-05 01:01:28.262139 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-05 01:01:28.262143 | orchestrator | 2026-01-05 01:01:28.262146 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-01-05 01:01:28.262150 | orchestrator | Monday 05 January 2026 01:00:32 +0000 (0:00:01.469) 0:11:09.004 ******** 2026-01-05 01:01:28.262156 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:01:28.262160 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-05 01:01:28.262164 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:01:28.262172 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-05 01:01:28.262176 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:01:28.262180 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-05 01:01:28.262184 | orchestrator | 2026-01-05 01:01:28.262187 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-05 01:01:28.262191 | orchestrator | Monday 05 January 2026 01:00:37 +0000 (0:00:04.882) 0:11:13.887 ******** 2026-01-05 01:01:28.262195 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:01:28.262199 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-05 01:01:28.262203 | orchestrator | 
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:01:28.262206 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-05 01:01:28.262210 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:01:28.262214 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-05 01:01:28.262218 | orchestrator | 2026-01-05 01:01:28.262221 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-05 01:01:28.262225 | orchestrator | Monday 05 January 2026 01:00:40 +0000 (0:00:02.458) 0:11:16.345 ******** 2026-01-05 01:01:28.262229 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-05 01:01:28.262232 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:01:28.262236 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-05 01:01:28.262240 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:01:28.262244 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-05 01:01:28.262247 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:01:28.262251 | orchestrator | 2026-01-05 01:01:28.262255 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-01-05 01:01:28.262262 | orchestrator | Monday 05 January 2026 01:00:41 +0000 (0:00:01.227) 0:11:17.573 ******** 2026-01-05 01:01:28.262266 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-01-05 01:01:28.262270 | orchestrator | 2026-01-05 01:01:28.262273 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-01-05 01:01:28.262277 | orchestrator | Monday 05 January 2026 01:00:41 +0000 (0:00:00.241) 0:11:17.814 ******** 2026-01-05 01:01:28.262281 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-01-05 01:01:28.262286 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 01:01:28.262289 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 01:01:28.262293 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 01:01:28.262297 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 01:01:28.262301 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.262305 | orchestrator | 2026-01-05 01:01:28.262311 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-01-05 01:01:28.262317 | orchestrator | Monday 05 January 2026 01:00:43 +0000 (0:00:01.299) 0:11:19.114 ******** 2026-01-05 01:01:28.262323 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 01:01:28.262328 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 01:01:28.262341 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 01:01:28.262351 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 01:01:28.262357 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 01:01:28.262363 | orchestrator | skipping: [testbed-node-3] 2026-01-05 
01:01:28.262369 | orchestrator | 2026-01-05 01:01:28.262375 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-01-05 01:01:28.262382 | orchestrator | Monday 05 January 2026 01:00:43 +0000 (0:00:00.630) 0:11:19.744 ******** 2026-01-05 01:01:28.262389 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-05 01:01:28.262399 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-05 01:01:28.262406 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-05 01:01:28.262413 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-05 01:01:28.262419 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-05 01:01:28.262426 | orchestrator | 2026-01-05 01:01:28.262432 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-01-05 01:01:28.262439 | orchestrator | Monday 05 January 2026 01:01:15 +0000 (0:00:31.417) 0:11:51.162 ******** 2026-01-05 01:01:28.262445 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.262451 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.262455 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.262458 | orchestrator | 2026-01-05 01:01:28.262462 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-01-05 01:01:28.262466 | orchestrator | 
Monday 05 January 2026 01:01:15 +0000 (0:00:00.302) 0:11:51.465 ******** 2026-01-05 01:01:28.262470 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.262474 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.262477 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.262481 | orchestrator | 2026-01-05 01:01:28.262485 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-01-05 01:01:28.262489 | orchestrator | Monday 05 January 2026 01:01:15 +0000 (0:00:00.272) 0:11:51.737 ******** 2026-01-05 01:01:28.262492 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:01:28.262496 | orchestrator | 2026-01-05 01:01:28.262500 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-01-05 01:01:28.262504 | orchestrator | Monday 05 January 2026 01:01:16 +0000 (0:00:00.678) 0:11:52.415 ******** 2026-01-05 01:01:28.262508 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:01:28.262511 | orchestrator | 2026-01-05 01:01:28.262520 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-01-05 01:01:28.262524 | orchestrator | Monday 05 January 2026 01:01:16 +0000 (0:00:00.542) 0:11:52.958 ******** 2026-01-05 01:01:28.262528 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:01:28.262531 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:01:28.262535 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:01:28.262539 | orchestrator | 2026-01-05 01:01:28.262543 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-01-05 01:01:28.262552 | orchestrator | Monday 05 January 2026 01:01:18 +0000 (0:00:01.218) 0:11:54.177 ******** 2026-01-05 01:01:28.262556 | orchestrator | changed: 
[testbed-node-3] 2026-01-05 01:01:28.262559 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:01:28.262563 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:01:28.262567 | orchestrator | 2026-01-05 01:01:28.262571 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-01-05 01:01:28.262575 | orchestrator | Monday 05 January 2026 01:01:19 +0000 (0:00:01.475) 0:11:55.652 ******** 2026-01-05 01:01:28.262578 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:01:28.262582 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:01:28.262586 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:01:28.262590 | orchestrator | 2026-01-05 01:01:28.262594 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-01-05 01:01:28.262597 | orchestrator | Monday 05 January 2026 01:01:21 +0000 (0:00:02.094) 0:11:57.747 ******** 2026-01-05 01:01:28.262601 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-05 01:01:28.262605 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-05 01:01:28.262609 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-05 01:01:28.262613 | orchestrator | 2026-01-05 01:01:28.262617 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-05 01:01:28.262620 | orchestrator | Monday 05 January 2026 01:01:24 +0000 (0:00:02.695) 0:12:00.442 ******** 2026-01-05 01:01:28.262624 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.262628 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.262632 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.262635 | orchestrator 
| 2026-01-05 01:01:28.262639 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-01-05 01:01:28.262643 | orchestrator | Monday 05 January 2026 01:01:24 +0000 (0:00:00.321) 0:12:00.764 ******** 2026-01-05 01:01:28.262647 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:01:28.262651 | orchestrator | 2026-01-05 01:01:28.262654 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-01-05 01:01:28.262658 | orchestrator | Monday 05 January 2026 01:01:25 +0000 (0:00:00.508) 0:12:01.272 ******** 2026-01-05 01:01:28.262662 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.262666 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.262669 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.262673 | orchestrator | 2026-01-05 01:01:28.262677 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-01-05 01:01:28.262683 | orchestrator | Monday 05 January 2026 01:01:25 +0000 (0:00:00.524) 0:12:01.796 ******** 2026-01-05 01:01:28.262687 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:01:28.262691 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:01:28.262695 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:01:28.262699 | orchestrator | 2026-01-05 01:01:28.262702 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-01-05 01:01:28.262706 | orchestrator | Monday 05 January 2026 01:01:26 +0000 (0:00:00.323) 0:12:02.120 ******** 2026-01-05 01:01:28.262710 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 01:01:28.262714 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 01:01:28.262717 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 01:01:28.262721 | orchestrator 
| skipping: [testbed-node-3] 2026-01-05 01:01:28.262725 | orchestrator | 2026-01-05 01:01:28.262729 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-01-05 01:01:28.262732 | orchestrator | Monday 05 January 2026 01:01:26 +0000 (0:00:00.587) 0:12:02.707 ******** 2026-01-05 01:01:28.262740 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:01:28.262743 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:01:28.262747 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:01:28.262751 | orchestrator | 2026-01-05 01:01:28.262755 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:01:28.262759 | orchestrator | testbed-node-0 : ok=134  changed=34  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-01-05 01:01:28.262763 | orchestrator | testbed-node-1 : ok=127  changed=32  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-01-05 01:01:28.262767 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-01-05 01:01:28.262771 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-01-05 01:01:28.262774 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-01-05 01:01:28.262781 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-01-05 01:01:28.262785 | orchestrator | 2026-01-05 01:01:28.262789 | orchestrator | 2026-01-05 01:01:28.262837 | orchestrator | 2026-01-05 01:01:28.262846 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:01:28.262850 | orchestrator | Monday 05 January 2026 01:01:26 +0000 (0:00:00.219) 0:12:02.926 ******** 2026-01-05 01:01:28.262854 | orchestrator | =============================================================================== 
2026-01-05 01:01:28.262858 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 56.39s 2026-01-05 01:01:28.262862 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 43.53s 2026-01-05 01:01:28.262865 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.57s 2026-01-05 01:01:28.262869 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.42s 2026-01-05 01:01:28.262873 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.49s 2026-01-05 01:01:28.262877 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.59s 2026-01-05 01:01:28.262880 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.69s 2026-01-05 01:01:28.262884 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.36s 2026-01-05 01:01:28.262888 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.63s 2026-01-05 01:01:28.262892 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.96s 2026-01-05 01:01:28.262896 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.49s 2026-01-05 01:01:28.262899 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 5.22s 2026-01-05 01:01:28.262903 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.06s 2026-01-05 01:01:28.262907 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.88s 2026-01-05 01:01:28.262911 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.82s 2026-01-05 01:01:28.262915 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 4.82s 2026-01-05 
01:01:28.262918 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 4.67s 2026-01-05 01:01:28.262922 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.25s 2026-01-05 01:01:28.262926 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.21s 2026-01-05 01:01:28.262930 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 4.08s 2026-01-05 01:01:28.262937 | orchestrator | 2026-01-05 01:01:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:01:31.287719 | orchestrator | 2026-01-05 01:01:31 | INFO  | Task be4b49fc-ca60-422e-935c-4fef4fd9f567 is in state STARTED 2026-01-05 01:01:31.294278 | orchestrator | 2026-01-05 01:01:31 | INFO  | Task af560da7-6454-40d3-b3d0-98778f7a574e is in state STARTED 2026-01-05 01:01:31.295780 | orchestrator | 2026-01-05 01:01:31 | INFO  | Task 43444a8f-52ed-434e-8806-dfae922b92ce is in state STARTED 2026-01-05 01:01:31.295832 | orchestrator | 2026-01-05 01:01:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:01:58.740234 | orchestrator | 2026-01-05 01:01:58 | INFO  | Task be4b49fc-ca60-422e-935c-4fef4fd9f567 is in state SUCCESS 2026-01-05 01:01:58.741452 | orchestrator | 2026-01-05 01:01:58.741512 | orchestrator | 2026-01-05 01:01:58.741523 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:01:58.741531 | orchestrator | 2026-01-05 01:01:58.741553 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 01:01:58.741561 | orchestrator | Monday 05 January 2026 00:59:19 +0000 (0:00:00.246) 0:00:00.246 ******** 2026-01-05 01:01:58.741568 | orchestrator | ok: [testbed-node-0] 
2026-01-05 01:01:58.741576 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:58.741584 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:58.741591 | orchestrator | 2026-01-05 01:01:58.741598 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:01:58.741604 | orchestrator | Monday 05 January 2026 00:59:20 +0000 (0:00:00.351) 0:00:00.598 ******** 2026-01-05 01:01:58.741612 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-01-05 01:01:58.741620 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-01-05 01:01:58.741626 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-01-05 01:01:58.741630 | orchestrator | 2026-01-05 01:01:58.741634 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-01-05 01:01:58.741638 | orchestrator | 2026-01-05 01:01:58.741642 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-05 01:01:58.741646 | orchestrator | Monday 05 January 2026 00:59:20 +0000 (0:00:00.460) 0:00:01.058 ******** 2026-01-05 01:01:58.741650 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:01:58.741655 | orchestrator | 2026-01-05 01:01:58.741659 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-01-05 01:01:58.741663 | orchestrator | Monday 05 January 2026 00:59:21 +0000 (0:00:00.523) 0:00:01.582 ******** 2026-01-05 01:01:58.741667 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-05 01:01:58.741670 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-05 01:01:58.741674 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-05 
01:01:58.741678 | orchestrator | 2026-01-05 01:01:58.741682 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-01-05 01:01:58.741686 | orchestrator | Monday 05 January 2026 00:59:21 +0000 (0:00:00.648) 0:00:02.231 ******** 2026-01-05 01:01:58.741692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 01:01:58.741719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 
2026-01-05 01:01:58.741735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 01:01:58.741745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 01:01:58.741777 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 01:01:58.741913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 01:01:58.741922 | orchestrator | 2026-01-05 01:01:58.741926 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-05 01:01:58.741930 | orchestrator | Monday 05 January 2026 00:59:23 +0000 (0:00:01.890) 0:00:04.122 ******** 2026-01-05 01:01:58.741934 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:01:58.741938 | orchestrator | 2026-01-05 01:01:58.741942 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-01-05 01:01:58.741946 | orchestrator | Monday 05 January 2026 00:59:24 +0000 (0:00:00.549) 0:00:04.671 ******** 2026-01-05 01:01:58.741960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 01:01:58.741966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 01:01:58.741970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 01:01:58.742048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 01:01:58.742062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 01:01:58.742071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 01:01:58.742075 | orchestrator | 2026-01-05 01:01:58.742079 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-01-05 01:01:58.742083 | orchestrator | Monday 05 January 2026 00:59:26 +0000 (0:00:02.730) 0:00:07.402 ******** 2026-01-05 01:01:58.742091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-05 01:01:58.742095 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-05 01:01:58.742100 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:58.742104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-05 
01:01:58.742114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-05 01:01:58.742119 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:58.742127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}})  2026-01-05 01:01:58.742131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-05 01:01:58.742135 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:58.742139 | orchestrator | 2026-01-05 01:01:58.742143 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-01-05 01:01:58.742147 | orchestrator | Monday 05 January 2026 00:59:28 +0000 (0:00:01.631) 0:00:09.034 ******** 2026-01-05 01:01:58.742151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-05 01:01:58.742161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-05 01:01:58.742169 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:58.742173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-05 01:01:58.742177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-05 01:01:58.742181 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:58.742185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-05 01:01:58.742213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-05 01:01:58.742225 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:58.742229 | orchestrator | 2026-01-05 01:01:58.742233 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-01-05 01:01:58.742237 | orchestrator | Monday 05 January 2026 00:59:29 +0000 (0:00:01.214) 0:00:10.248 ******** 2026-01-05 01:01:58.742241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': 
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 01:01:58.742245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 01:01:58.742250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 01:01:58.742264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 01:01:58.742269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 01:01:58.742279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 01:01:58.742283 | orchestrator | 2026-01-05 01:01:58.742287 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-01-05 
01:01:58.742291 | orchestrator | Monday 05 January 2026 00:59:32 +0000 (0:00:02.683) 0:00:12.931 ******** 2026-01-05 01:01:58.742295 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:01:58.742299 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:01:58.742303 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:01:58.742307 | orchestrator | 2026-01-05 01:01:58.742311 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-01-05 01:01:58.742315 | orchestrator | Monday 05 January 2026 00:59:35 +0000 (0:00:03.162) 0:00:16.094 ******** 2026-01-05 01:01:58.742318 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:01:58.742322 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:01:58.742326 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:01:58.742330 | orchestrator | 2026-01-05 01:01:58.742334 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-01-05 01:01:58.742338 | orchestrator | Monday 05 January 2026 00:59:37 +0000 (0:00:01.947) 0:00:18.041 ******** 2026-01-05 01:01:58.742342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 01:01:58.742356 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 01:01:58.742361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 01:01:58.742365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 01:01:58.742370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 01:01:58.742381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 01:01:58.742389 | orchestrator | 2026-01-05 01:01:58.742393 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-05 01:01:58.742397 | orchestrator | Monday 05 January 2026 00:59:39 +0000 (0:00:02.314) 0:00:20.355 ******** 2026-01-05 01:01:58.742401 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:58.742405 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:58.742409 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:58.742413 | orchestrator | 2026-01-05 01:01:58.742417 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-05 01:01:58.742420 | orchestrator | Monday 05 January 2026 00:59:40 +0000 (0:00:00.342) 0:00:20.698 ******** 2026-01-05 01:01:58.742425 | orchestrator | 2026-01-05 01:01:58.742428 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-05 01:01:58.742432 | orchestrator | Monday 05 January 2026 
00:59:40 +0000 (0:00:00.086) 0:00:20.785 ******** 2026-01-05 01:01:58.742436 | orchestrator | 2026-01-05 01:01:58.742440 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-05 01:01:58.742444 | orchestrator | Monday 05 January 2026 00:59:40 +0000 (0:00:00.070) 0:00:20.855 ******** 2026-01-05 01:01:58.742447 | orchestrator | 2026-01-05 01:01:58.742451 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-01-05 01:01:58.742455 | orchestrator | Monday 05 January 2026 00:59:40 +0000 (0:00:00.073) 0:00:20.928 ******** 2026-01-05 01:01:58.742459 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:58.742463 | orchestrator | 2026-01-05 01:01:58.742467 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-01-05 01:01:58.742470 | orchestrator | Monday 05 January 2026 00:59:40 +0000 (0:00:00.220) 0:00:21.149 ******** 2026-01-05 01:01:58.742474 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:58.742478 | orchestrator | 2026-01-05 01:01:58.742482 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-01-05 01:01:58.742486 | orchestrator | Monday 05 January 2026 00:59:41 +0000 (0:00:00.784) 0:00:21.934 ******** 2026-01-05 01:01:58.742489 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:01:58.742493 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:01:58.742497 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:01:58.742501 | orchestrator | 2026-01-05 01:01:58.742504 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-01-05 01:01:58.742508 | orchestrator | Monday 05 January 2026 01:00:32 +0000 (0:00:50.833) 0:01:12.767 ******** 2026-01-05 01:01:58.742512 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:01:58.742516 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:01:58.742520 
| orchestrator | changed: [testbed-node-2] 2026-01-05 01:01:58.742523 | orchestrator | 2026-01-05 01:01:58.742527 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-05 01:01:58.742531 | orchestrator | Monday 05 January 2026 01:01:45 +0000 (0:01:12.870) 0:02:25.637 ******** 2026-01-05 01:01:58.742535 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:01:58.742539 | orchestrator | 2026-01-05 01:01:58.742543 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-01-05 01:01:58.742553 | orchestrator | Monday 05 January 2026 01:01:45 +0000 (0:00:00.675) 0:02:26.313 ******** 2026-01-05 01:01:58.742557 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:58.742561 | orchestrator | 2026-01-05 01:01:58.742565 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-01-05 01:01:58.742569 | orchestrator | Monday 05 January 2026 01:01:48 +0000 (0:00:02.776) 0:02:29.089 ******** 2026-01-05 01:01:58.742573 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:58.742577 | orchestrator | 2026-01-05 01:01:58.742581 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-01-05 01:01:58.742585 | orchestrator | Monday 05 January 2026 01:01:51 +0000 (0:00:02.637) 0:02:31.726 ******** 2026-01-05 01:01:58.742588 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:01:58.742592 | orchestrator | 2026-01-05 01:01:58.742596 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-01-05 01:01:58.742600 | orchestrator | Monday 05 January 2026 01:01:54 +0000 (0:00:03.152) 0:02:34.879 ******** 2026-01-05 01:01:58.742606 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:01:58.742612 | orchestrator | 2026-01-05 01:01:58.742618 | orchestrator | PLAY RECAP 
********************************************************************* 2026-01-05 01:01:58.742625 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-05 01:01:58.742634 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-05 01:01:58.742640 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-05 01:01:58.742645 | orchestrator | 2026-01-05 01:01:58.742651 | orchestrator | 2026-01-05 01:01:58.742657 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:01:58.742667 | orchestrator | Monday 05 January 2026 01:01:57 +0000 (0:00:02.886) 0:02:37.765 ******** 2026-01-05 01:01:58.742677 | orchestrator | =============================================================================== 2026-01-05 01:01:58.742683 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 72.87s 2026-01-05 01:01:58.742689 | orchestrator | opensearch : Restart opensearch container ------------------------------ 50.83s 2026-01-05 01:01:58.742695 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.16s 2026-01-05 01:01:58.742701 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.15s 2026-01-05 01:01:58.742708 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.89s 2026-01-05 01:01:58.742714 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.78s 2026-01-05 01:01:58.742721 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.73s 2026-01-05 01:01:58.742727 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.68s 2026-01-05 01:01:58.742733 | orchestrator | opensearch : Check if a log retention 
policy exists --------------------- 2.64s 2026-01-05 01:01:58.742739 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.31s 2026-01-05 01:01:58.742745 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.95s 2026-01-05 01:01:58.742817 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.89s 2026-01-05 01:01:58.742823 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.63s 2026-01-05 01:01:58.742828 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.21s 2026-01-05 01:01:58.742833 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.78s 2026-01-05 01:01:58.742838 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.68s 2026-01-05 01:01:58.742842 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.65s 2026-01-05 01:01:58.742852 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.55s 2026-01-05 01:01:58.742856 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2026-01-05 01:01:58.742860 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s 2026-01-05 01:01:58.742865 | orchestrator | 2026-01-05 01:01:58 | INFO  | Task af560da7-6454-40d3-b3d0-98778f7a574e is in state STARTED 2026-01-05 01:01:58.743723 | orchestrator | 2026-01-05 01:01:58 | INFO  | Task 43444a8f-52ed-434e-8806-dfae922b92ce is in state STARTED 2026-01-05 01:01:58.743778 | orchestrator | 2026-01-05 01:01:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:02:01.797057 | orchestrator | 2026-01-05 01:02:01 | INFO  | Task af560da7-6454-40d3-b3d0-98778f7a574e is in state STARTED 2026-01-05 01:02:01.798982 | orchestrator | 2026-01-05 01:02:01 
| INFO  | Task 43444a8f-52ed-434e-8806-dfae922b92ce is in state STARTED 2026-01-05 01:02:01.799036 | orchestrator | 2026-01-05 01:02:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:02:17.045413 | orchestrator | 2026-01-05 01:02:17 | INFO  | Task af560da7-6454-40d3-b3d0-98778f7a574e is in state STARTED 2026-01-05 01:02:17.046291 | orchestrator | 2026-01-05 01:02:17 | INFO  | Task 43444a8f-52ed-434e-8806-dfae922b92ce is in state STARTED
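The orchestrator output above is a simple status-poll loop: it reads each task's state, and while the state is still STARTED it waits one second and checks again, stopping once a terminal state such as SUCCESS is reported. A minimal sketch of that loop (the `get_state` callable, the terminal-state set, and the one-second default are assumptions inferred from the log messages, not OSISM's actual implementation):

```python
import time

# Assumed terminal states for this sketch; the log only shows STARTED and SUCCESS.
TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_task(task_id, get_state, interval=1, sleep=time.sleep):
    """Poll get_state(task_id) until it returns a terminal state,
    logging in the same shape as the job output above."""
    while True:
        state = get_state(task_id)
        print(f"Task {task_id} is in state {state}")
        if state in TERMINAL_STATES:
            return state
        print(f"Wait {interval} second(s) until the next check")
        sleep(interval)

# Example with a canned state sequence instead of a real task queue:
states = iter(["STARTED", "STARTED", "SUCCESS"])
result = wait_for_task("af560da7", lambda _tid: next(states), sleep=lambda s: None)
```

Injecting `get_state` and `sleep` keeps the loop testable without a real task backend or real delays.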
2026-01-05 01:02:17.046340 | orchestrator | 2026-01-05 01:02:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:02:20.089120 | orchestrator | 2026-01-05 01:02:20 | INFO  | Task af560da7-6454-40d3-b3d0-98778f7a574e is in state STARTED 2026-01-05 01:02:20.090986 | orchestrator | 2026-01-05 01:02:20 | INFO  | Task 43444a8f-52ed-434e-8806-dfae922b92ce is in state STARTED 2026-01-05 01:02:20.091084 | orchestrator | 2026-01-05 01:02:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:02:23.131904 | orchestrator | 2026-01-05 01:02:23 | INFO  | Task af560da7-6454-40d3-b3d0-98778f7a574e is in state STARTED 2026-01-05 01:02:23.131988 | orchestrator | 2026-01-05 01:02:23 | INFO  | Task 43444a8f-52ed-434e-8806-dfae922b92ce is in state STARTED 2026-01-05 01:02:23.132002 | orchestrator | 2026-01-05 01:02:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:02:26.174583 | orchestrator | 2026-01-05 01:02:26 | INFO  | Task af560da7-6454-40d3-b3d0-98778f7a574e is in state STARTED 2026-01-05 01:02:26.175166 | orchestrator | 2026-01-05 01:02:26 | INFO  | Task 43444a8f-52ed-434e-8806-dfae922b92ce is in state STARTED 2026-01-05 01:02:26.175222 | orchestrator | 2026-01-05 01:02:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:02:29.229417 | orchestrator | 2026-01-05 01:02:29 | INFO  | Task af560da7-6454-40d3-b3d0-98778f7a574e is in state STARTED 2026-01-05 01:02:29.231369 | orchestrator | 2026-01-05 01:02:29 | INFO  | Task 43444a8f-52ed-434e-8806-dfae922b92ce is in state STARTED 2026-01-05 01:02:29.231408 | orchestrator | 2026-01-05 01:02:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:02:32.281864 | orchestrator | 2026-01-05 01:02:32 | INFO  | Task af560da7-6454-40d3-b3d0-98778f7a574e is in state SUCCESS 2026-01-05 01:02:32.285213 | orchestrator | 2026-01-05 01:02:32.285333 | orchestrator | 2026-01-05 01:02:32.285359 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 
2026-01-05 01:02:32.285380 | orchestrator | 2026-01-05 01:02:32.285404 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-01-05 01:02:32.285551 | orchestrator | Monday 05 January 2026 00:59:19 +0000 (0:00:00.093) 0:00:00.093 ******** 2026-01-05 01:02:32.285581 | orchestrator | ok: [localhost] => { 2026-01-05 01:02:32.285602 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-01-05 01:02:32.285623 | orchestrator | } 2026-01-05 01:02:32.285644 | orchestrator | 2026-01-05 01:02:32.285664 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-01-05 01:02:32.285682 | orchestrator | Monday 05 January 2026 00:59:19 +0000 (0:00:00.037) 0:00:00.131 ******** 2026-01-05 01:02:32.285702 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-01-05 01:02:32.285760 | orchestrator | ...ignoring 2026-01-05 01:02:32.285779 | orchestrator | 2026-01-05 01:02:32.285798 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-01-05 01:02:32.285848 | orchestrator | Monday 05 January 2026 00:59:22 +0000 (0:00:02.820) 0:00:02.952 ******** 2026-01-05 01:02:32.285868 | orchestrator | skipping: [localhost] 2026-01-05 01:02:32.285887 | orchestrator | 2026-01-05 01:02:32.285904 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-01-05 01:02:32.285923 | orchestrator | Monday 05 January 2026 00:59:22 +0000 (0:00:00.066) 0:00:03.019 ******** 2026-01-05 01:02:32.285941 | orchestrator | ok: [localhost] 2026-01-05 01:02:32.285959 | orchestrator | 2026-01-05 01:02:32.285976 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:02:32.285995 | orchestrator | 
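The play above probes 192.168.16.9:3306 for the string "MariaDB" (via Ansible's wait_for, with the failure deliberately ignored) and only switches `kolla_action_mariadb` to `upgrade` when a cluster already answers; on timeout, as in this run, it falls back to `kolla_action_ng`. A hedged sketch of that decision (the plain TCP-connect probe is a simplification of wait_for's banner search, and the helper names are hypothetical):

```python
import socket

def mariadb_reachable(host, port=3306, timeout=2):
    """Rough stand-in for the wait_for probe: can we connect to host:port?
    (The real task additionally searches the banner for 'MariaDB'.)"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def choose_kolla_action(mariadb_running, kolla_action_ng="deploy"):
    # Upgrade an already-running cluster; otherwise keep the configured action.
    return "upgrade" if mariadb_running else kolla_action_ng

# In this run the probe timed out, so the play kept the deploy action:
action = choose_kolla_action(mariadb_running=False)
```

Ignoring the probe failure is what lets the same playbook handle both first deployment and later upgrades of MariaDB.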
2026-01-05 01:02:32.286013 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 01:02:32.286101 | orchestrator | Monday 05 January 2026 00:59:22 +0000 (0:00:00.164) 0:00:03.183 ******** 2026-01-05 01:02:32.286114 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:02:32.286125 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:02:32.286137 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:02:32.286148 | orchestrator | 2026-01-05 01:02:32.286162 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:02:32.286182 | orchestrator | Monday 05 January 2026 00:59:23 +0000 (0:00:00.370) 0:00:03.553 ******** 2026-01-05 01:02:32.286201 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-01-05 01:02:32.286222 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-01-05 01:02:32.286243 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-01-05 01:02:32.286263 | orchestrator | 2026-01-05 01:02:32.286283 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-01-05 01:02:32.286329 | orchestrator | 2026-01-05 01:02:32.286349 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-01-05 01:02:32.286368 | orchestrator | Monday 05 January 2026 00:59:23 +0000 (0:00:00.654) 0:00:04.207 ******** 2026-01-05 01:02:32.286386 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-05 01:02:32.286407 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-05 01:02:32.286426 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-05 01:02:32.286448 | orchestrator | 2026-01-05 01:02:32.286466 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-05 01:02:32.286485 | orchestrator | Monday 05 January 2026 00:59:24 +0000 (0:00:00.452) 
0:00:04.660 ******** 2026-01-05 01:02:32.286497 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:02:32.286509 | orchestrator | 2026-01-05 01:02:32.286520 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-01-05 01:02:32.286531 | orchestrator | Monday 05 January 2026 00:59:24 +0000 (0:00:00.535) 0:00:05.196 ******** 2026-01-05 01:02:32.286591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 01:02:32.286610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 01:02:32.286640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', '']}}}}) 2026-01-05 01:02:32.286653 | orchestrator | 2026-01-05 01:02:32.286674 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-01-05 01:02:32.286686 | orchestrator | Monday 05 January 2026 00:59:28 +0000 (0:00:03.615) 0:00:08.811 ******** 2026-01-05 01:02:32.286697 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:02:32.286740 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:02:32.286759 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:02:32.286771 | orchestrator | 2026-01-05 01:02:32.286781 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-01-05 01:02:32.286792 | orchestrator | Monday 05 January 2026 00:59:29 +0000 (0:00:01.017) 0:00:09.829 ******** 2026-01-05 01:02:32.286803 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:02:32.286814 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:02:32.286825 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:02:32.286836 | orchestrator | 2026-01-05 01:02:32.286847 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-01-05 01:02:32.286857 | orchestrator | Monday 05 January 2026 00:59:31 +0000 (0:00:01.648) 0:00:11.477 ******** 2026-01-05 01:02:32.286869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 01:02:32.286903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 01:02:32.286917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-05 01:02:32.286937 | orchestrator | 
2026-01-05 01:02:32.286948 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-01-05 01:02:32.286959 | orchestrator | Monday 05 January 2026 00:59:34 +0000 (0:00:03.919) 0:00:15.397 ********
2026-01-05 01:02:32.286970 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:02:32.286981 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:02:32.286992 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:02:32.287003 | orchestrator | 
2026-01-05 01:02:32.287013 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-01-05 01:02:32.287024 | orchestrator | Monday 05 January 2026 00:59:36 +0000 (0:00:01.223) 0:00:16.620 ********
2026-01-05 01:02:32.287035 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:02:32.287046 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:02:32.287057 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:02:32.287067 | orchestrator | 
2026-01-05 01:02:32.287078 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-01-05 01:02:32.287089 | orchestrator | Monday 05 January 2026 00:59:40 +0000 (0:00:04.315) 0:00:20.935 ********
2026-01-05 01:02:32.287105 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:02:32.287116 | orchestrator | 
2026-01-05 01:02:32.287127 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-01-05 01:02:32.287138 | orchestrator | Monday 05 January 2026 00:59:41 +0000 (0:00:00.597) 0:00:21.533 ********
2026-01-05 01:02:32.287158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 01:02:32.287178 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:02:32.287190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 01:02:32.287202 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:02:32.287227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-05 01:02:32.287246 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:02:32.287257 | orchestrator | 
2026-01-05 01:02:32.287268 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-01-05 01:02:32.287279 | orchestrator | Monday 05 January 2026 00:59:44 +0000 (0:00:03.665) 0:00:25.199 ********
2026-01-05 01:02:32.287290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 01:02:32.287302 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:02:32.287325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 01:02:32.287343 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:02:32.287355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-05 01:02:32.287367 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:02:32.287378 | orchestrator | 
2026-01-05 01:02:32.287389 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-01-05 01:02:32.287400 | orchestrator | Monday 05 January 2026 00:59:48 +0000 (0:00:03.693) 0:00:28.893 ********
2026-01-05 01:02:32.287417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 01:02:32.287442 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:02:32.287463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 01:02:32.287475 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:02:32.287492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-05 01:02:32.287504 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:02:32.287515 | orchestrator | 
2026-01-05 01:02:32.287526 | orchestrator | TASK [mariadb : Check mariadb containers] **************************************
2026-01-05 01:02:32.287543 | orchestrator | Monday 05 January 2026 00:59:51 +0000 (0:00:03.541) 0:00:32.434 ********
2026-01-05 01:02:32.287564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 01:02:32.287582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 01:02:32.287604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-05 01:02:32.287624 | orchestrator | 
2026-01-05 01:02:32.287635 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-01-05 01:02:32.287651 | orchestrator | Monday 05 January 2026 00:59:55 +0000 (0:00:03.708) 0:00:36.143 ********
2026-01-05 01:02:32.287669 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:02:32.287687 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:02:32.287747 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:02:32.287764 | orchestrator | 
2026-01-05 01:02:32.287783 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-01-05 01:02:32.287795 | orchestrator | Monday 05 January 2026 00:59:56 +0000 (0:00:00.900) 0:00:37.043 ********
2026-01-05 01:02:32.287806 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:02:32.287817 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:02:32.287828 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:02:32.287839 | orchestrator | 
2026-01-05 01:02:32.287850 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-01-05 01:02:32.287861 | orchestrator | Monday 05 January 2026 00:59:57 +0000 (0:00:00.698) 0:00:37.742 ********
2026-01-05 01:02:32.287872 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:02:32.287883 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:02:32.287893 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:02:32.287904 | orchestrator | 
2026-01-05 01:02:32.287915 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-01-05 01:02:32.287926 | orchestrator | Monday 05 January 2026 00:59:57 +0000 (0:00:00.427) 0:00:38.170 ********
2026-01-05 01:02:32.287938 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-01-05 01:02:32.287949 | orchestrator | ...ignoring
2026-01-05 01:02:32.287961 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-01-05 01:02:32.287972 | orchestrator | ...ignoring
2026-01-05 01:02:32.287983 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-01-05 01:02:32.287994 | orchestrator | ...ignoring
2026-01-05 01:02:32.288004 | orchestrator | 
2026-01-05 01:02:32.288016 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-01-05 01:02:32.288026 | orchestrator | Monday 05 January 2026 01:00:08 +0000 (0:00:10.925) 0:00:49.096 ********
2026-01-05 01:02:32.288038 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:02:32.288057 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:02:32.288069 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:02:32.288079 | orchestrator | 
2026-01-05 01:02:32.288090 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-01-05 01:02:32.288101 | orchestrator | Monday 05 January 2026 01:00:09 +0000 (0:00:00.456) 0:00:49.552 ********
2026-01-05 01:02:32.288112 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:02:32.288123 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:02:32.288134 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:02:32.288145 | orchestrator | 
2026-01-05 01:02:32.288156 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-01-05 01:02:32.288166 | orchestrator | Monday 05 January 2026 01:00:09 +0000 (0:00:00.675) 0:00:50.227 ********
2026-01-05 01:02:32.288177 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:02:32.288188 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:02:32.288199 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:02:32.288209 | orchestrator | 
2026-01-05 01:02:32.288220 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-01-05 01:02:32.288231 | orchestrator | Monday 05 January 2026 01:00:10 +0000 (0:00:00.509) 0:00:50.737 ********
2026-01-05 01:02:32.288242 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:02:32.288253 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:02:32.288263 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:02:32.288274 | orchestrator | 
2026-01-05 01:02:32.288285 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-01-05 01:02:32.288296 | orchestrator | Monday 05 January 2026 01:00:10 +0000 (0:00:00.475) 0:00:51.213 ********
2026-01-05 01:02:32.288306 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:02:32.288317 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:02:32.288328 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:02:32.288339 | orchestrator | 
2026-01-05 01:02:32.288390 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-01-05 01:02:32.288403 | orchestrator | Monday 05 January 2026 01:00:11 +0000 (0:00:00.433) 0:00:51.646 ********
2026-01-05 01:02:32.288431 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:02:32.288449 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:02:32.288466 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:02:32.288482 | orchestrator | 
2026-01-05 01:02:32.288499 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-01-05 01:02:32.288516 | orchestrator | Monday 05 January 2026 01:00:11 +0000 (0:00:00.752) 0:00:52.399 ********
2026-01-05 01:02:32.288534 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:02:32.288551 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:02:32.288568 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-01-05 01:02:32.288585 | orchestrator | 
2026-01-05 01:02:32.288603 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-01-05 01:02:32.288854 | orchestrator | Monday 05 January 2026 01:00:12 +0000 (0:00:00.420) 0:00:52.820 ********
2026-01-05 01:02:32.288880 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:02:32.288892 | orchestrator | 
2026-01-05 01:02:32.288903 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-01-05 01:02:32.288914 | orchestrator | Monday 05 January 2026 01:00:23 +0000 (0:00:10.674) 0:01:03.495 ********
2026-01-05 01:02:32.288925 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:02:32.288936 | orchestrator | 
2026-01-05 01:02:32.288947 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-01-05 01:02:32.288959 | orchestrator | Monday 05 January 2026 01:00:23 +0000 (0:00:00.123) 0:01:03.618 ********
2026-01-05 01:02:32.288970 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:02:32.288981 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:02:32.288992 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:02:32.289003 | orchestrator | 
2026-01-05 01:02:32.289015 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-01-05 01:02:32.289050 | orchestrator | Monday 05 January 2026 01:00:24 +0000 (0:00:01.015) 0:01:04.634 ********
2026-01-05 01:02:32.289071 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:02:32.289089 | orchestrator | 
2026-01-05 01:02:32.289108 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-01-05 01:02:32.289121 | orchestrator | Monday 05 January 2026 01:00:32 +0000 (0:00:08.101) 0:01:12.736 ********
2026-01-05 01:02:32.289132 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for first MariaDB service port liveness (10 retries left).
2026-01-05 01:02:32.289145 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:02:32.289156 | orchestrator | 
2026-01-05 01:02:32.289166 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-01-05 01:02:32.289177 | orchestrator | Monday 05 January 2026 01:00:39 +0000 (0:00:07.315) 0:01:20.051 ********
2026-01-05 01:02:32.289188 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:02:32.289199 | orchestrator | 
2026-01-05 01:02:32.289209 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-01-05 01:02:32.289220 | orchestrator | Monday 05 January 2026 01:00:42 +0000 (0:00:02.549) 0:01:22.600 ********
2026-01-05 01:02:32.289231 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:02:32.289242 | orchestrator | 
2026-01-05 01:02:32.289252 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-01-05 01:02:32.289263 | orchestrator | Monday 05 January 2026 01:00:42 +0000 (0:00:00.126) 0:01:22.727 ********
2026-01-05 01:02:32.289274 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:02:32.289285 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:02:32.289295 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:02:32.289306 | orchestrator | 
2026-01-05 01:02:32.289317 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-01-05 01:02:32.289328 | orchestrator | Monday 05 January 2026 01:00:42 +0000 (0:00:00.336) 0:01:23.063 ********
2026-01-05 01:02:32.289338 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:02:32.289349 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-01-05 01:02:32.289360 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:02:32.289371 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:02:32.289382 | orchestrator | 
2026-01-05 01:02:32.289392 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-01-05 01:02:32.289411 | orchestrator | skipping: no hosts matched
2026-01-05 01:02:32.289422 | orchestrator | 
2026-01-05 01:02:32.289433 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-01-05 01:02:32.289444 | orchestrator | 
2026-01-05 01:02:32.289454 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-01-05 01:02:32.289466 | orchestrator | Monday 05 January 2026 01:00:43 +0000 (0:00:00.600) 0:01:23.663 ********
2026-01-05 01:02:32.289479 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:02:32.289492 | orchestrator | 
2026-01-05 01:02:32.289505 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-01-05 01:02:32.289518 | orchestrator | Monday 05 January 2026 01:01:00 +0000 (0:00:17.091) 0:01:40.755 ********
2026-01-05 01:02:32.289532 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:02:32.289546 | orchestrator | 
2026-01-05 01:02:32.289565 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-01-05 01:02:32.289585 | orchestrator | Monday 05 January 2026 01:01:15 +0000 (0:00:15.634) 0:01:56.390 ********
2026-01-05 01:02:32.289604 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:02:32.289623 | orchestrator | 
2026-01-05 01:02:32.289642 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-01-05 01:02:32.289654 | 
orchestrator | 2026-01-05 01:02:32.289664 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-05 01:02:32.289675 | orchestrator | Monday 05 January 2026 01:01:18 +0000 (0:00:02.263) 0:01:58.654 ******** 2026-01-05 01:02:32.289686 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:02:32.289697 | orchestrator | 2026-01-05 01:02:32.289734 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-05 01:02:32.289753 | orchestrator | Monday 05 January 2026 01:01:36 +0000 (0:00:18.532) 0:02:17.186 ******** 2026-01-05 01:02:32.289764 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:02:32.289776 | orchestrator | 2026-01-05 01:02:32.289787 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-05 01:02:32.289798 | orchestrator | Monday 05 January 2026 01:01:52 +0000 (0:00:15.626) 0:02:32.812 ******** 2026-01-05 01:02:32.289821 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:02:32.289833 | orchestrator | 2026-01-05 01:02:32.289844 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-01-05 01:02:32.289855 | orchestrator | 2026-01-05 01:02:32.289866 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-05 01:02:32.289877 | orchestrator | Monday 05 January 2026 01:01:54 +0000 (0:00:02.328) 0:02:35.141 ******** 2026-01-05 01:02:32.289888 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:02:32.289899 | orchestrator | 2026-01-05 01:02:32.289910 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-05 01:02:32.289920 | orchestrator | Monday 05 January 2026 01:02:07 +0000 (0:00:12.421) 0:02:47.563 ******** 2026-01-05 01:02:32.289931 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:02:32.289942 | orchestrator | 2026-01-05 01:02:32.289953 | orchestrator | TASK 
[mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-05 01:02:32.289963 | orchestrator | Monday 05 January 2026 01:02:11 +0000 (0:00:04.665) 0:02:52.228 ******** 2026-01-05 01:02:32.289974 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:02:32.289985 | orchestrator | 2026-01-05 01:02:32.289996 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-01-05 01:02:32.290006 | orchestrator | 2026-01-05 01:02:32.290077 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-01-05 01:02:32.290092 | orchestrator | Monday 05 January 2026 01:02:14 +0000 (0:00:02.574) 0:02:54.803 ******** 2026-01-05 01:02:32.290103 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:02:32.290114 | orchestrator | 2026-01-05 01:02:32.290125 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-01-05 01:02:32.290136 | orchestrator | Monday 05 January 2026 01:02:14 +0000 (0:00:00.497) 0:02:55.301 ******** 2026-01-05 01:02:32.290146 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:02:32.290157 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:02:32.290168 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:02:32.290179 | orchestrator | 2026-01-05 01:02:32.290190 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-01-05 01:02:32.290200 | orchestrator | Monday 05 January 2026 01:02:17 +0000 (0:00:02.885) 0:02:58.186 ******** 2026-01-05 01:02:32.290211 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:02:32.290222 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:02:32.290233 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:02:32.290244 | orchestrator | 2026-01-05 01:02:32.290255 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 
2026-01-05 01:02:32.290266 | orchestrator | Monday 05 January 2026 01:02:20 +0000 (0:00:02.573) 0:03:00.760 ******** 2026-01-05 01:02:32.290276 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:02:32.290288 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:02:32.290298 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:02:32.290309 | orchestrator | 2026-01-05 01:02:32.290320 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-01-05 01:02:32.290331 | orchestrator | Monday 05 January 2026 01:02:23 +0000 (0:00:02.836) 0:03:03.597 ******** 2026-01-05 01:02:32.290342 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:02:32.290353 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:02:32.290364 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:02:32.290374 | orchestrator | 2026-01-05 01:02:32.290385 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-01-05 01:02:32.290404 | orchestrator | Monday 05 January 2026 01:02:25 +0000 (0:00:02.563) 0:03:06.161 ******** 2026-01-05 01:02:32.290415 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:02:32.290426 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:02:32.290437 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:02:32.290448 | orchestrator | 2026-01-05 01:02:32.290459 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-01-05 01:02:32.290470 | orchestrator | Monday 05 January 2026 01:02:28 +0000 (0:00:03.249) 0:03:09.410 ******** 2026-01-05 01:02:32.290481 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:02:32.290491 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:02:32.290502 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:02:32.290513 | orchestrator | 2026-01-05 01:02:32.290524 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 
01:02:32.290541 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-01-05 01:02:32.290553 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-01-05 01:02:32.290566 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-01-05 01:02:32.290577 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-01-05 01:02:32.290588 | orchestrator | 2026-01-05 01:02:32.290599 | orchestrator | 2026-01-05 01:02:32.290610 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:02:32.290621 | orchestrator | Monday 05 January 2026 01:02:29 +0000 (0:00:00.254) 0:03:09.665 ******** 2026-01-05 01:02:32.290632 | orchestrator | =============================================================================== 2026-01-05 01:02:32.290642 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 35.62s 2026-01-05 01:02:32.290653 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 31.26s 2026-01-05 01:02:32.290664 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.42s 2026-01-05 01:02:32.290675 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.93s 2026-01-05 01:02:32.290685 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.68s 2026-01-05 01:02:32.290703 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.10s 2026-01-05 01:02:32.290746 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 7.32s 2026-01-05 01:02:32.290765 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.67s 2026-01-05 01:02:32.290784 | 
orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.59s 2026-01-05 01:02:32.290803 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.32s 2026-01-05 01:02:32.290823 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.92s 2026-01-05 01:02:32.290835 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.71s 2026-01-05 01:02:32.290846 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.69s 2026-01-05 01:02:32.290857 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.67s 2026-01-05 01:02:32.290867 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.62s 2026-01-05 01:02:32.290878 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.54s 2026-01-05 01:02:32.290890 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.25s 2026-01-05 01:02:32.290901 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.89s 2026-01-05 01:02:32.290912 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.84s 2026-01-05 01:02:32.290931 | orchestrator | Check MariaDB service --------------------------------------------------- 2.82s 2026-01-05 01:02:32.290942 | orchestrator | 2026-01-05 01:02:32 | INFO  | Task 43444a8f-52ed-434e-8806-dfae922b92ce is in state STARTED 2026-01-05 01:02:32.290954 | orchestrator | 2026-01-05 01:02:32 | INFO  | Task 08bb32eb-ce60-4e7f-9a99-a77265b0be88 is in state STARTED 2026-01-05 01:02:32.291212 | orchestrator | 2026-01-05 01:02:32 | INFO  | Task 00c00486-57c0-4099-83e8-aed474fba234 is in state STARTED 2026-01-05 01:02:32.291230 | orchestrator | 2026-01-05 01:02:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 
01:02:35.346727 | orchestrator | 2026-01-05 01:02:35 | INFO  | Task 43444a8f-52ed-434e-8806-dfae922b92ce is in state STARTED [identical polling of the same three tasks repeated every ~3 seconds, no state change, through 01:03:30] 2026-01-05 01:03:33.365425 | orchestrator | 2026-01-05 01:03:33 | INFO  | Task
43444a8f-52ed-434e-8806-dfae922b92ce is in state STARTED 2026-01-05 01:03:33.370478 | orchestrator | 2026-01-05 01:03:33 | INFO  | Task 08bb32eb-ce60-4e7f-9a99-a77265b0be88 is in state STARTED 2026-01-05 01:03:33.374406 | orchestrator | 2026-01-05 01:03:33 | INFO  | Task 00c00486-57c0-4099-83e8-aed474fba234 is in state STARTED 2026-01-05 01:03:33.374466 | orchestrator | 2026-01-05 01:03:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:03:36.452820 | orchestrator | 2026-01-05 01:03:36 | INFO  | Task 43444a8f-52ed-434e-8806-dfae922b92ce is in state STARTED 2026-01-05 01:03:36.455359 | orchestrator | 2026-01-05 01:03:36 | INFO  | Task 08bb32eb-ce60-4e7f-9a99-a77265b0be88 is in state STARTED 2026-01-05 01:03:36.458597 | orchestrator | 2026-01-05 01:03:36 | INFO  | Task 00c00486-57c0-4099-83e8-aed474fba234 is in state STARTED 2026-01-05 01:03:36.458717 | orchestrator | 2026-01-05 01:03:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:03:39.498973 | orchestrator | 2026-01-05 01:03:39 | INFO  | Task 43444a8f-52ed-434e-8806-dfae922b92ce is in state STARTED 2026-01-05 01:03:39.499299 | orchestrator | 2026-01-05 01:03:39 | INFO  | Task 08bb32eb-ce60-4e7f-9a99-a77265b0be88 is in state STARTED 2026-01-05 01:03:39.500634 | orchestrator | 2026-01-05 01:03:39 | INFO  | Task 00c00486-57c0-4099-83e8-aed474fba234 is in state STARTED 2026-01-05 01:03:39.500731 | orchestrator | 2026-01-05 01:03:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:03:42.533310 | orchestrator | 2026-01-05 01:03:42 | INFO  | Task ef4df852-edf7-46c1-b7a4-d31a202b2cd3 is in state STARTED 2026-01-05 01:03:42.533418 | orchestrator | 2026-01-05 01:03:42 | INFO  | Task dd114b84-7e37-4b00-a1d4-5f1d61828f7c is in state STARTED 2026-01-05 01:03:42.533853 | orchestrator | 2026-01-05 01:03:42 | INFO  | Task 7fe86e08-6f30-4466-9205-5989e2e6ba5f is in state STARTED 2026-01-05 01:03:42.534738 | orchestrator | 2026-01-05 01:03:42 | INFO  | Task 
43444a8f-52ed-434e-8806-dfae922b92ce is in state STARTED 2026-01-05 01:03:42.537384 | orchestrator | 2026-01-05 01:03:42 | INFO  | Task 08bb32eb-ce60-4e7f-9a99-a77265b0be88 is in state SUCCESS 2026-01-05 01:03:42.539293 | orchestrator | 2026-01-05 01:03:42.539491 | orchestrator | 2026-01-05 01:03:42.539509 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:03:42.539732 | orchestrator | 2026-01-05 01:03:42.539762 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 01:03:42.539779 | orchestrator | Monday 05 January 2026 01:02:34 +0000 (0:00:00.257) 0:00:00.257 ******** 2026-01-05 01:03:42.539795 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:03:42.539813 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:03:42.539827 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:03:42.539842 | orchestrator | 2026-01-05 01:03:42.539858 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:03:42.539874 | orchestrator | Monday 05 January 2026 01:02:34 +0000 (0:00:00.328) 0:00:00.585 ******** 2026-01-05 01:03:42.539891 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-01-05 01:03:42.540348 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-01-05 01:03:42.540380 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-01-05 01:03:42.540400 | orchestrator | 2026-01-05 01:03:42.540417 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-01-05 01:03:42.540435 | orchestrator | 2026-01-05 01:03:42.540452 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-05 01:03:42.540468 | orchestrator | Monday 05 January 2026 01:02:34 +0000 (0:00:00.464) 0:00:01.049 ******** 2026-01-05 01:03:42.540484 | orchestrator | included: 
/ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:03:42.540502 | orchestrator | 2026-01-05 01:03:42.540520 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-01-05 01:03:42.540537 | orchestrator | Monday 05 January 2026 01:02:35 +0000 (0:00:00.549) 0:00:01.598 ******** 2026-01-05 01:03:42.540562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:03:42.540668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:03:42.540714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:03:42.540736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:03:42.540755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:03:42.540771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:03:42.540805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:03:42.540832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:03:42.541399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:03:42.541424 | orchestrator | 2026-01-05 01:03:42.541682 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-01-05 01:03:42.541699 | orchestrator | Monday 05 January 2026 01:02:37 +0000 (0:00:01.973) 0:00:03.572 ******** 2026-01-05 01:03:42.541709 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:03:42.541720 | orchestrator | 2026-01-05 01:03:42.541766 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-01-05 01:03:42.541778 | orchestrator | 
Monday 05 January 2026 01:02:37 +0000 (0:00:00.138) 0:00:03.710 ******** 2026-01-05 01:03:42.541788 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:03:42.541799 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:03:42.541808 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:03:42.541818 | orchestrator | 2026-01-05 01:03:42.541828 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-01-05 01:03:42.541838 | orchestrator | Monday 05 January 2026 01:02:37 +0000 (0:00:00.499) 0:00:04.209 ******** 2026-01-05 01:03:42.541847 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 01:03:42.541857 | orchestrator | 2026-01-05 01:03:42.541867 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-05 01:03:42.541876 | orchestrator | Monday 05 January 2026 01:02:38 +0000 (0:00:00.918) 0:00:05.127 ******** 2026-01-05 01:03:42.541886 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:03:42.541896 | orchestrator | 2026-01-05 01:03:42.541905 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-01-05 01:03:42.541915 | orchestrator | Monday 05 January 2026 01:02:39 +0000 (0:00:00.619) 0:00:05.747 ******** 2026-01-05 01:03:42.541942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 
'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:03:42.541977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:03:42.542004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:03:42.542145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:03:42.542168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:03:42.542194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:03:42.542205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:03:42.542222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:03:42.542232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 
'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:03:42.542243 | orchestrator | 2026-01-05 01:03:42.542253 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-01-05 01:03:42.542263 | orchestrator | Monday 05 January 2026 01:02:43 +0000 (0:00:03.517) 0:00:09.265 ******** 2026-01-05 01:03:42.542305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 01:03:42.542325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:03:42.542338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:03:42.542350 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:03:42.542367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 01:03:42.542384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:03:42.542403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:03:42.542421 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:03:42.542483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 01:03:42.542521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:03:42.542540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:03:42.542558 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:03:42.542576 | orchestrator | 2026-01-05 
01:03:42.542631 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-01-05 01:03:42.542648 | orchestrator | Monday 05 January 2026 01:02:43 +0000 (0:00:00.802) 0:00:10.067 ******** 2026-01-05 01:03:42.542675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 01:03:42.542694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:03:42.542770 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:03:42.542788 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:03:42.542804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 01:03:42.542820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:03:42.542844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:03:42.542862 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:03:42.542881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 01:03:42.542961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:03:42.542981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:03:42.542999 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:03:42.543014 | orchestrator | 2026-01-05 01:03:42.543031 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-01-05 01:03:42.543046 | orchestrator | Monday 05 January 2026 01:02:44 +0000 (0:00:00.838) 0:00:10.905 ******** 2026-01-05 01:03:42.543065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:03:42.543091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:03:42.543157 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:03:42.543182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:03:42.543193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:03:42.543203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:03:42.543214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:03:42.543248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:03:42.543259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:03:42.543276 | orchestrator | 2026-01-05 01:03:42.543286 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-01-05 01:03:42.543296 | orchestrator | Monday 05 January 2026 01:02:47 +0000 (0:00:03.293) 0:00:14.198 ******** 2026-01-05 01:03:42.543347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:03:42.543367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:03:42.543384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:03:42.543409 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:03:42.543469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:03:42.543491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:03:42.543501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:03:42.543511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:03:42.543522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:03:42.543532 | orchestrator | 2026-01-05 01:03:42.543542 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-01-05 01:03:42.543551 | orchestrator | Monday 05 January 2026 01:02:54 +0000 (0:00:06.306) 0:00:20.505 ******** 2026-01-05 01:03:42.543561 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:03:42.543571 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:03:42.543585 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:03:42.543626 | orchestrator | 2026-01-05 01:03:42.543637 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-01-05 01:03:42.543653 | orchestrator | Monday 05 January 2026 01:02:55 +0000 (0:00:01.583) 0:00:22.088 ******** 2026-01-05 01:03:42.543663 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:03:42.543672 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:03:42.543682 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:03:42.543691 | orchestrator | 2026-01-05 01:03:42.543701 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-01-05 01:03:42.543711 | orchestrator | Monday 05 January 2026 01:02:56 +0000 (0:00:00.590) 0:00:22.679 ******** 2026-01-05 01:03:42.543721 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:03:42.543730 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:03:42.543740 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:03:42.543749 | orchestrator | 2026-01-05 01:03:42.543759 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-01-05 01:03:42.543769 | orchestrator | Monday 05 January 2026 01:02:56 +0000 (0:00:00.306) 0:00:22.985 ******** 
2026-01-05 01:03:42.543778 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:03:42.543788 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:03:42.543798 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:03:42.543807 | orchestrator | 2026-01-05 01:03:42.543817 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-01-05 01:03:42.543829 | orchestrator | Monday 05 January 2026 01:02:57 +0000 (0:00:00.545) 0:00:23.530 ******** 2026-01-05 01:03:42.543889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 01:03:42.543910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:03:42.543927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:03:42.543945 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:03:42.543968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  
2026-01-05 01:03:42.543996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:03:42.544059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:03:42.544080 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:03:42.544098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 01:03:42.544116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:03:42.544133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:03:42.544161 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:03:42.544178 | orchestrator | 2026-01-05 01:03:42.544194 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-05 01:03:42.544212 | orchestrator | Monday 05 
January 2026 01:02:57 +0000 (0:00:00.629) 0:00:24.160 ******** 2026-01-05 01:03:42.544228 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:03:42.544245 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:03:42.544262 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:03:42.544278 | orchestrator | 2026-01-05 01:03:42.544293 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-01-05 01:03:42.544314 | orchestrator | Monday 05 January 2026 01:02:58 +0000 (0:00:00.309) 0:00:24.469 ******** 2026-01-05 01:03:42.544324 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-05 01:03:42.544335 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-05 01:03:42.544350 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-05 01:03:42.544366 | orchestrator | 2026-01-05 01:03:42.544382 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-01-05 01:03:42.544397 | orchestrator | Monday 05 January 2026 01:02:59 +0000 (0:00:01.705) 0:00:26.175 ******** 2026-01-05 01:03:42.544412 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 01:03:42.544427 | orchestrator | 2026-01-05 01:03:42.544444 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-01-05 01:03:42.544459 | orchestrator | Monday 05 January 2026 01:03:00 +0000 (0:00:00.931) 0:00:27.106 ******** 2026-01-05 01:03:42.544476 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:03:42.544493 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:03:42.544510 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:03:42.544526 | orchestrator | 2026-01-05 01:03:42.544540 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] 
***************** 2026-01-05 01:03:42.544550 | orchestrator | Monday 05 January 2026 01:03:01 +0000 (0:00:00.900) 0:00:28.007 ******** 2026-01-05 01:03:42.544560 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-05 01:03:42.544569 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-05 01:03:42.544579 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 01:03:42.544617 | orchestrator | 2026-01-05 01:03:42.544629 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-01-05 01:03:42.544648 | orchestrator | Monday 05 January 2026 01:03:02 +0000 (0:00:01.194) 0:00:29.201 ******** 2026-01-05 01:03:42.544658 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:03:42.544668 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:03:42.544678 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:03:42.544687 | orchestrator | 2026-01-05 01:03:42.544697 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-01-05 01:03:42.544706 | orchestrator | Monday 05 January 2026 01:03:03 +0000 (0:00:00.382) 0:00:29.584 ******** 2026-01-05 01:03:42.544716 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-05 01:03:42.544726 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-05 01:03:42.544735 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-05 01:03:42.544750 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-05 01:03:42.544766 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-05 01:03:42.544856 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-05 01:03:42.544875 | orchestrator | changed: [testbed-node-1] => 
(item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-05 01:03:42.544891 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-05 01:03:42.544902 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-05 01:03:42.544911 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-05 01:03:42.544921 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-05 01:03:42.544931 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-05 01:03:42.544940 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-05 01:03:42.544950 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-05 01:03:42.544960 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-05 01:03:42.544969 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-05 01:03:42.544979 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-05 01:03:42.544989 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-05 01:03:42.545000 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-05 01:03:42.545017 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-05 01:03:42.545032 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-05 01:03:42.545048 | orchestrator | 2026-01-05 
01:03:42.545064 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-01-05 01:03:42.545079 | orchestrator | Monday 05 January 2026 01:03:13 +0000 (0:00:09.796) 0:00:39.381 ******** 2026-01-05 01:03:42.545095 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-05 01:03:42.545108 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-05 01:03:42.545131 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-05 01:03:42.545149 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-05 01:03:42.545166 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-05 01:03:42.545183 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-05 01:03:42.545200 | orchestrator | 2026-01-05 01:03:42.545217 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-01-05 01:03:42.545233 | orchestrator | Monday 05 January 2026 01:03:16 +0000 (0:00:03.028) 0:00:42.409 ******** 2026-01-05 01:03:42.545267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:03:42.545307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:03:42.545327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:03:42.545353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:03:42.545371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:03:42.545383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:03:42.545501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:03:42.545517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:03:42.545527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:03:42.545537 | orchestrator | 2026-01-05 01:03:42.545547 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-05 01:03:42.545557 | orchestrator | Monday 05 January 2026 01:03:18 +0000 (0:00:02.590) 0:00:45.000 ******** 2026-01-05 01:03:42.545567 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:03:42.545577 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:03:42.545587 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:03:42.545628 | orchestrator | 2026-01-05 01:03:42.545638 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-01-05 01:03:42.545648 | orchestrator | Monday 05 January 2026 01:03:19 +0000 (0:00:00.328) 0:00:45.329 ******** 2026-01-05 01:03:42.545658 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:03:42.545667 | orchestrator | 2026-01-05 01:03:42.545677 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-01-05 01:03:42.545686 | orchestrator | Monday 05 January 2026 01:03:21 +0000 (0:00:02.619) 0:00:47.948 ******** 2026-01-05 01:03:42.545696 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:03:42.545705 | orchestrator | 2026-01-05 01:03:42.545715 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-01-05 01:03:42.545725 | orchestrator | Monday 05 January 2026 01:03:24 +0000 (0:00:02.531) 0:00:50.480 ******** 2026-01-05 01:03:42.545745 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:03:42.545755 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:03:42.545765 | orchestrator | ok: 
[testbed-node-1] 2026-01-05 01:03:42.545774 | orchestrator | 2026-01-05 01:03:42.545790 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-01-05 01:03:42.545800 | orchestrator | Monday 05 January 2026 01:03:25 +0000 (0:00:00.836) 0:00:51.317 ******** 2026-01-05 01:03:42.545810 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:03:42.545819 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:03:42.545829 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:03:42.545846 | orchestrator | 2026-01-05 01:03:42.545856 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-01-05 01:03:42.545866 | orchestrator | Monday 05 January 2026 01:03:25 +0000 (0:00:00.500) 0:00:51.818 ******** 2026-01-05 01:03:42.545875 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:03:42.545885 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:03:42.545894 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:03:42.545904 | orchestrator | 2026-01-05 01:03:42.545914 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-01-05 01:03:42.545923 | orchestrator | Monday 05 January 2026 01:03:25 +0000 (0:00:00.312) 0:00:52.131 ******** 2026-01-05 01:03:42.546231 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": true, "msg": "Container exited with non-zero return code 1", "rc": 1, "stderr": "+ sudo -E kolla_set_configs\nINFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json\nINFO:__main__:Validating config file\nINFO:__main__:Kolla config strategy set to: COPY_ALWAYS\nINFO:__main__:Copying service configuration files\nINFO:__main__:Copying /var/lib/kolla/config_files/keystone-startup.sh to /usr/bin/keystone-startup.sh\nINFO:__main__:Setting permission for /usr/bin/keystone-startup.sh\nINFO:__main__:Copying /var/lib/kolla/config_files/keystone.conf to /etc/keystone/keystone.conf\nINFO:__main__:Setting permission for /etc/keystone/keystone.conf\nINFO:__main__:Copying /var/lib/kolla/config_files/wsgi-keystone.conf to /etc/apache2/conf-enabled/wsgi-keystone.conf\nINFO:__main__:Setting permission for /etc/apache2/conf-enabled/wsgi-keystone.conf\nINFO:__main__:Writing out command to execute\nINFO:__main__:Setting permission for /var/log/kolla\nINFO:__main__:Setting permission for /etc/keystone/fernet-keys\n++ cat /run_command\n+ CMD=/usr/bin/keystone-startup.sh\n+ ARGS=\n+ sudo kolla_copy_cacerts\nrehash: warning: skipping ca-certificates.crt,it does not contain exactly one certificate or CRL\n+ sudo kolla_install_projects\n+ [[ ! -n '' ]]\n+ . kolla_extend_start\n++ KEYSTONE_LOG_DIR=/var/log/kolla/keystone\n++ [[ ! -d /var/log/kolla/keystone ]]\n++ mkdir -p /var/log/kolla/keystone\n+++ stat -c %U:%G /var/log/kolla/keystone\n++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\o\\l\\l\\a ]]\n++ chown keystone:kolla /var/log/kolla/keystone\n++ '[' '!' 
-f /var/log/kolla/keystone/keystone.log ']'\n++ touch /var/log/kolla/keystone/keystone.log\n+++ stat -c %U:%G /var/log/kolla/keystone/keystone.log\n++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\e\\y\\s\\t\\o\\n\\e ]]\n++ chown keystone:keystone /var/log/kolla/keystone/keystone.log\n+++ stat -c %a /var/log/kolla/keystone\n++ [[ 2755 != \\7\\5\\5 ]]\n++ chmod 755 /var/log/kolla/keystone\n++ EXTRA_KEYSTONE_MANAGE_ARGS=\n++ [[ -n '' ]]\n++ [[ -n '' ]]\n++ [[ -n 0 ]]\n++ sudo -H -u keystone keystone-manage db_sync\n2026-01-05 01:03:37.890 1079 DEBUG oslo_db.sqlalchemy.engines [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py:342\n2026-01-05 01:03:37.900 1079 CRITICAL keystone [-] Unhandled error: sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system variable 'transaction_isolation'\")\n(Background on this error at: https://sqlalche.me/e/20/e3q8)\n2026-01-05 01:03:37.900 1079 ERROR keystone Traceback (most recent call last):\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__\n2026-01-05 01:03:37.900 1079 ERROR keystone self._dbapi_connection = engine.raw_connection()\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3302, in raw_connection\n2026-01-05 01:03:37.900 1079 ERROR keystone return self.pool.connect()\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 449, in connect\n2026-01-05 01:03:37.900 1079 
ERROR keystone return _ConnectionFairy._checkout(self)\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout\n2026-01-05 01:03:37.900 1079 ERROR keystone fairy = _ConnectionRecord.checkout(pool)\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout\n2026-01-05 01:03:37.900 1079 ERROR keystone rec = pool._do_get()\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get\n2026-01-05 01:03:37.900 1079 ERROR keystone with util.safe_reraise():\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__\n2026-01-05 01:03:37.900 1079 ERROR keystone raise exc_value.with_traceback(exc_tb)\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get\n2026-01-05 01:03:37.900 1079 ERROR keystone return self._create_connection()\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection\n2026-01-05 01:03:37.900 1079 ERROR keystone return _ConnectionRecord(self)\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__\n2026-01-05 01:03:37.900 1079 ERROR keystone 
self.__connect()\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect\n2026-01-05 01:03:37.900 1079 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run\n2026-01-05 01:03:37.900 1079 ERROR keystone self(*args, **kw)\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in __call__\n2026-01-05 01:03:37.900 1079 ERROR keystone fn(*args, **kw)\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1912, in go\n2026-01-05 01:03:37.900 1079 ERROR keystone return once_fn(*arg, **kw)\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 749, in first_connect\n2026-01-05 01:03:37.900 1079 ERROR keystone dialect.initialize(c)\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2835, in initialize\n2026-01-05 01:03:37.900 1079 ERROR keystone default.DefaultDialect.initialize(self, connection)\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 532, in initialize\n2026-01-05 01:03:37.900 1079 ERROR keystone self.default_isolation_level = self.get_default_isolation_level(\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 583, in get_default_isolation_level\n2026-01-05 01:03:37.900 1079 ERROR keystone return self.get_isolation_level(dbapi_conn)\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2540, in get_isolation_level\n2026-01-05 01:03:37.900 1079 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute\n2026-01-05 01:03:37.900 1079 ERROR keystone result = self._query(query)\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query\n2026-01-05 01:03:37.900 1079 ERROR keystone conn.query(q)\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query\n2026-01-05 01:03:37.900 1079 ERROR keystone self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result\n2026-01-05 01:03:37.900 1079 ERROR keystone result.read()\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read\n2026-01-05 01:03:37.900 1079 ERROR keystone first_packet = self.connection._read_packet()\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in _read_packet\n2026-01-05 01:03:37.900 1079 ERROR keystone packet.raise_for_error()\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error\n2026-01-05 01:03:37.900 1079 ERROR keystone err.raise_mysql_exception(self._data)\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception\n2026-01-05 01:03:37.900 1079 ERROR keystone raise errorclass(errno, errval)\n2026-01-05 01:03:37.900 1079 ERROR keystone pymysql.err.OperationalError: (1193, \"Unknown system variable 'transaction_isolation'\")\n2026-01-05 01:03:37.900 1079 ERROR keystone \n2026-01-05 01:03:37.900 1079 ERROR keystone The above exception was the direct cause of the following exception:\n2026-01-05 01:03:37.900 1079 ERROR keystone \n2026-01-05 01:03:37.900 1079 ERROR keystone Traceback (most recent call last):\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/bin/keystone-manage\", line 7, in \n2026-01-05 01:03:37.900 1079 ERROR keystone sys.exit(main())\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/manage.py\", line 36, in main\n2026-01-05 01:03:37.900 1079 ERROR keystone cli.main(argv=sys.argv, developer_config_file=developer_config)\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 1733, in main\n2026-01-05 01:03:37.900 1079 ERROR keystone CONF.command.cmd_class.main()\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 493, in main\n2026-01-05 01:03:37.900 1079 ERROR keystone 
upgrades.offline_sync_database_to_version(CONF.command.version)\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 328, in offline_sync_database_to_version\n2026-01-05 01:03:37.900 1079 ERROR keystone _db_sync(engine=engine)\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 217, in _db_sync\n2026-01-05 01:03:37.900 1079 ERROR keystone with sql.session_for_write() as session:\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__\n2026-01-05 01:03:37.900 1079 ERROR keystone return next(self.gen)\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 1042, in _transaction_scope\n2026-01-05 01:03:37.900 1079 ERROR keystone with current._produce_block(\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__\n2026-01-05 01:03:37.900 1079 ERROR keystone return next(self.gen)\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 641, in _session\n2026-01-05 01:03:37.900 1079 ERROR keystone self.session = self.factory._create_session(\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 404, in _create_session\n2026-01-05 01:03:37.900 1079 ERROR keystone self._start()\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 493, in 
_start\n2026-01-05 01:03:37.900 1079 ERROR keystone self._setup_for_connection(\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 519, in _setup_for_connection\n2026-01-05 01:03:37.900 1079 ERROR keystone engine = engines.create_engine(\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/debtcollector/renames.py\", line 41, in decorator\n2026-01-05 01:03:37.900 1079 ERROR keystone return wrapped(*args, **kwargs)\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py\", line 218, in create_engine\n2026-01-05 01:03:37.900 1079 ERROR keystone test_conn = _test_connection(engine, max_retries, retry_interval)\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py\", line 411, in _test_connection\n2026-01-05 01:03:37.900 1079 ERROR keystone return engine.connect()\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3278, in connect\n2026-01-05 01:03:37.900 1079 ERROR keystone return self._connection_cls(self)\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 148, in __init__\n2026-01-05 01:03:37.900 1079 ERROR keystone Connection._handle_dbapi_exception_noconnection(\n2026-01-05 01:03:37.900 1079 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 2439, in _handle_dbapi_exception_noconnection\n2026-01-05 01:03:37.900 1079 ERROR keystone raise newraise.with_traceback(exc_info[2]) from e\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__\n2026-01-05 01:03:37.900 1079 ERROR keystone self._dbapi_connection = engine.raw_connection()\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3302, in raw_connection\n2026-01-05 01:03:37.900 1079 ERROR keystone return self.pool.connect()\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 449, in connect\n2026-01-05 01:03:37.900 1079 ERROR keystone return _ConnectionFairy._checkout(self)\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout\n2026-01-05 01:03:37.900 1079 ERROR keystone fairy = _ConnectionRecord.checkout(pool)\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout\n2026-01-05 01:03:37.900 1079 ERROR keystone rec = pool._do_get()\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get\n2026-01-05 01:03:37.900 1079 ERROR keystone with util.safe_reraise():\n2026-01-05 01:03:37.900 1079 ERROR 
keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__\n2026-01-05 01:03:37.900 1079 ERROR keystone raise exc_value.with_traceback(exc_tb)\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get\n2026-01-05 01:03:37.900 1079 ERROR keystone return self._create_connection()\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection\n2026-01-05 01:03:37.900 1079 ERROR keystone return _ConnectionRecord(self)\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__\n2026-01-05 01:03:37.900 1079 ERROR keystone self.__connect()\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect\n2026-01-05 01:03:37.900 1079 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run\n2026-01-05 01:03:37.900 1079 ERROR keystone self(*args, **kw)\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in __call__\n2026-01-05 01:03:37.900 1079 ERROR keystone fn(*args, **kw)\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1912, in go\n2026-01-05 01:03:37.900 1079 
ERROR keystone return once_fn(*arg, **kw)\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 749, in first_connect\n2026-01-05 01:03:37.900 1079 ERROR keystone dialect.initialize(c)\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2835, in initialize\n2026-01-05 01:03:37.900 1079 ERROR keystone default.DefaultDialect.initialize(self, connection)\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 532, in initialize\n2026-01-05 01:03:37.900 1079 ERROR keystone self.default_isolation_level = self.get_default_isolation_level(\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 583, in get_default_isolation_level\n2026-01-05 01:03:37.900 1079 ERROR keystone return self.get_isolation_level(dbapi_conn)\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2540, in get_isolation_level\n2026-01-05 01:03:37.900 1079 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute\n2026-01-05 01:03:37.900 1079 ERROR keystone result = self._query(query)\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query\n2026-01-05 
01:03:37.900 1079 ERROR keystone conn.query(q)\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query\n2026-01-05 01:03:37.900 1079 ERROR keystone self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result\n2026-01-05 01:03:37.900 1079 ERROR keystone result.read()\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read\n2026-01-05 01:03:37.900 1079 ERROR keystone first_packet = self.connection._read_packet()\n2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in _read_packet\n2026-01-05 01:03:37.900 1079 ERROR keystone packet.raise_for_error()\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error\n2026-01-05 01:03:37.900 1079 ERROR keystone err.raise_mysql_exception(self._data)\n2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception\n2026-01-05 01:03:37.900 1079 ERROR keystone raise errorclass(errno, errval)\n2026-01-05 01:03:37.900 1079 ERROR keystone sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system variable 'transaction_isolation'\")\n2026-01-05 01:03:37.900 1079 ERROR keystone (Background on this error at: https://sqlalche.me/e/20/e3q8)\n2026-01-05 01:03:37.900 1079 ERROR keystone \n", "stderr_lines": ["+ sudo -E kolla_set_configs", 
"INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json", "INFO:__main__:Validating config file", "INFO:__main__:Kolla config strategy set to: COPY_ALWAYS", "INFO:__main__:Copying service configuration files", "INFO:__main__:Copying /var/lib/kolla/config_files/keystone-startup.sh to /usr/bin/keystone-startup.sh", "INFO:__main__:Setting permission for /usr/bin/keystone-startup.sh", "INFO:__main__:Copying /var/lib/kolla/config_files/keystone.conf to /etc/keystone/keystone.conf", "INFO:__main__:Setting permission for /etc/keystone/keystone.conf", "INFO:__main__:Copying /var/lib/kolla/config_files/wsgi-keystone.conf to /etc/apache2/conf-enabled/wsgi-keystone.conf", "INFO:__main__:Setting permission for /etc/apache2/conf-enabled/wsgi-keystone.conf", "INFO:__main__:Writing out command to execute", "INFO:__main__:Setting permission for /var/log/kolla", "INFO:__main__:Setting permission for /etc/keystone/fernet-keys", "++ cat /run_command", "+ CMD=/usr/bin/keystone-startup.sh", "+ ARGS=", "+ sudo kolla_copy_cacerts", "rehash: warning: skipping ca-certificates.crt,it does not contain exactly one certificate or CRL", "+ sudo kolla_install_projects", "+ [[ ! -n '' ]]", "+ . kolla_extend_start", "++ KEYSTONE_LOG_DIR=/var/log/kolla/keystone", "++ [[ ! -d /var/log/kolla/keystone ]]", "++ mkdir -p /var/log/kolla/keystone", "+++ stat -c %U:%G /var/log/kolla/keystone", "++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\o\\l\\l\\a ]]", "++ chown keystone:kolla /var/log/kolla/keystone", "++ '[' '!' 
-f /var/log/kolla/keystone/keystone.log ']'", "++ touch /var/log/kolla/keystone/keystone.log", "+++ stat -c %U:%G /var/log/kolla/keystone/keystone.log", "++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\e\\y\\s\\t\\o\\n\\e ]]", "++ chown keystone:keystone /var/log/kolla/keystone/keystone.log", "+++ stat -c %a /var/log/kolla/keystone", "++ [[ 2755 != \\7\\5\\5 ]]", "++ chmod 755 /var/log/kolla/keystone", "++ EXTRA_KEYSTONE_MANAGE_ARGS=", "++ [[ -n '' ]]", "++ [[ -n '' ]]", "++ [[ -n 0 ]]", "++ sudo -H -u keystone keystone-manage db_sync", "2026-01-05 01:03:37.890 1079 DEBUG oslo_db.sqlalchemy.engines [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py:342", "2026-01-05 01:03:37.900 1079 CRITICAL keystone [-] Unhandled error: sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system variable 'transaction_isolation'\")", "(Background on this error at: https://sqlalche.me/e/20/e3q8)", "2026-01-05 01:03:37.900 1079 ERROR keystone Traceback (most recent call last):", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__", "2026-01-05 01:03:37.900 1079 ERROR keystone self._dbapi_connection = engine.raw_connection()", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3302, in raw_connection", "2026-01-05 01:03:37.900 1079 ERROR keystone return self.pool.connect()", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 
449, in connect", "2026-01-05 01:03:37.900 1079 ERROR keystone return _ConnectionFairy._checkout(self)", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout", "2026-01-05 01:03:37.900 1079 ERROR keystone fairy = _ConnectionRecord.checkout(pool)", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout", "2026-01-05 01:03:37.900 1079 ERROR keystone rec = pool._do_get()", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get", "2026-01-05 01:03:37.900 1079 ERROR keystone with util.safe_reraise():", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__", "2026-01-05 01:03:37.900 1079 ERROR keystone raise exc_value.with_traceback(exc_tb)", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get", "2026-01-05 01:03:37.900 1079 ERROR keystone return self._create_connection()", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection", "2026-01-05 01:03:37.900 1079 ERROR keystone return _ConnectionRecord(self)", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__", "2026-01-05 01:03:37.900 1079 ERROR keystone self.__connect()", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect", "2026-01-05 01:03:37.900 1079 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run", "2026-01-05 01:03:37.900 1079 ERROR keystone self(*args, **kw)", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in __call__", "2026-01-05 01:03:37.900 1079 ERROR keystone fn(*args, **kw)", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1912, in go", "2026-01-05 01:03:37.900 1079 ERROR keystone return once_fn(*arg, **kw)", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 749, in first_connect", "2026-01-05 01:03:37.900 1079 ERROR keystone dialect.initialize(c)", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2835, in initialize", "2026-01-05 01:03:37.900 1079 ERROR keystone default.DefaultDialect.initialize(self, connection)", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 532, in initialize", "2026-01-05 01:03:37.900 1079 ERROR keystone self.default_isolation_level = 
self.get_default_isolation_level(", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 583, in get_default_isolation_level", "2026-01-05 01:03:37.900 1079 ERROR keystone return self.get_isolation_level(dbapi_conn)", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2540, in get_isolation_level", "2026-01-05 01:03:37.900 1079 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute", "2026-01-05 01:03:37.900 1079 ERROR keystone result = self._query(query)", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query", "2026-01-05 01:03:37.900 1079 ERROR keystone conn.query(q)", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query", "2026-01-05 01:03:37.900 1079 ERROR keystone self._affected_rows = self._read_query_result(unbuffered=unbuffered)", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result", "2026-01-05 01:03:37.900 1079 ERROR keystone result.read()", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read", "2026-01-05 01:03:37.900 1079 ERROR 
keystone first_packet = self.connection._read_packet()", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in _read_packet", "2026-01-05 01:03:37.900 1079 ERROR keystone packet.raise_for_error()", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error", "2026-01-05 01:03:37.900 1079 ERROR keystone err.raise_mysql_exception(self._data)", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception", "2026-01-05 01:03:37.900 1079 ERROR keystone raise errorclass(errno, errval)", "2026-01-05 01:03:37.900 1079 ERROR keystone pymysql.err.OperationalError: (1193, \"Unknown system variable 'transaction_isolation'\")", "2026-01-05 01:03:37.900 1079 ERROR keystone ", "2026-01-05 01:03:37.900 1079 ERROR keystone The above exception was the direct cause of the following exception:", "2026-01-05 01:03:37.900 1079 ERROR keystone ", "2026-01-05 01:03:37.900 1079 ERROR keystone Traceback (most recent call last):", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/bin/keystone-manage\", line 7, in ", "2026-01-05 01:03:37.900 1079 ERROR keystone sys.exit(main())", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/manage.py\", line 36, in main", "2026-01-05 01:03:37.900 1079 ERROR keystone cli.main(argv=sys.argv, developer_config_file=developer_config)", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 1733, in main", "2026-01-05 01:03:37.900 1079 ERROR keystone CONF.command.cmd_class.main()", "2026-01-05 01:03:37.900 
1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 493, in main", "2026-01-05 01:03:37.900 1079 ERROR keystone upgrades.offline_sync_database_to_version(CONF.command.version)", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 328, in offline_sync_database_to_version", "2026-01-05 01:03:37.900 1079 ERROR keystone _db_sync(engine=engine)", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 217, in _db_sync", "2026-01-05 01:03:37.900 1079 ERROR keystone with sql.session_for_write() as session:", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__", "2026-01-05 01:03:37.900 1079 ERROR keystone return next(self.gen)", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 1042, in _transaction_scope", "2026-01-05 01:03:37.900 1079 ERROR keystone with current._produce_block(", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__", "2026-01-05 01:03:37.900 1079 ERROR keystone return next(self.gen)", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 641, in _session", "2026-01-05 01:03:37.900 1079 ERROR keystone self.session = self.factory._create_session(", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 404, in _create_session", "2026-01-05 01:03:37.900 
1079 ERROR keystone self._start()", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 493, in _start", "2026-01-05 01:03:37.900 1079 ERROR keystone self._setup_for_connection(", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 519, in _setup_for_connection", "2026-01-05 01:03:37.900 1079 ERROR keystone engine = engines.create_engine(", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/debtcollector/renames.py\", line 41, in decorator", "2026-01-05 01:03:37.900 1079 ERROR keystone return wrapped(*args, **kwargs)", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py\", line 218, in create_engine", "2026-01-05 01:03:37.900 1079 ERROR keystone test_conn = _test_connection(engine, max_retries, retry_interval)", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py\", line 411, in _test_connection", "2026-01-05 01:03:37.900 1079 ERROR keystone return engine.connect()", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3278, in connect", "2026-01-05 01:03:37.900 1079 ERROR keystone return self._connection_cls(self)", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 148, in __init__", "2026-01-05 01:03:37.900 1079 ERROR keystone Connection._handle_dbapi_exception_noconnection(", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 2439, in _handle_dbapi_exception_noconnection", "2026-01-05 01:03:37.900 1079 ERROR keystone raise newraise.with_traceback(exc_info[2]) from e", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__", "2026-01-05 01:03:37.900 1079 ERROR keystone self._dbapi_connection = engine.raw_connection()", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3302, in raw_connection", "2026-01-05 01:03:37.900 1079 ERROR keystone return self.pool.connect()", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 449, in connect", "2026-01-05 01:03:37.900 1079 ERROR keystone return _ConnectionFairy._checkout(self)", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout", "2026-01-05 01:03:37.900 1079 ERROR keystone fairy = _ConnectionRecord.checkout(pool)", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout", "2026-01-05 01:03:37.900 1079 ERROR keystone rec = pool._do_get()", "2026-01-05 01:03:37.900 1079 ERROR 
keystone ^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get", "2026-01-05 01:03:37.900 1079 ERROR keystone with util.safe_reraise():", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__", "2026-01-05 01:03:37.900 1079 ERROR keystone raise exc_value.with_traceback(exc_tb)", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get", "2026-01-05 01:03:37.900 1079 ERROR keystone return self._create_connection()", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection", "2026-01-05 01:03:37.900 1079 ERROR keystone return _ConnectionRecord(self)", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__", "2026-01-05 01:03:37.900 1079 ERROR keystone self.__connect()", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect", "2026-01-05 01:03:37.900 1079 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run", "2026-01-05 01:03:37.900 1079 ERROR keystone self(*args, **kw)", "2026-01-05 01:03:37.900 1079 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in __call__", "2026-01-05 01:03:37.900 1079 ERROR keystone fn(*args, **kw)", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1912, in go", "2026-01-05 01:03:37.900 1079 ERROR keystone return once_fn(*arg, **kw)", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 749, in first_connect", "2026-01-05 01:03:37.900 1079 ERROR keystone dialect.initialize(c)", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2835, in initialize", "2026-01-05 01:03:37.900 1079 ERROR keystone default.DefaultDialect.initialize(self, connection)", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 532, in initialize", "2026-01-05 01:03:37.900 1079 ERROR keystone self.default_isolation_level = self.get_default_isolation_level(", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 583, in get_default_isolation_level", "2026-01-05 01:03:37.900 1079 ERROR keystone return self.get_isolation_level(dbapi_conn)", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2540, in get_isolation_level", "2026-01-05 01:03:37.900 1079 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")", "2026-01-05 01:03:37.900 1079 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute", "2026-01-05 01:03:37.900 1079 ERROR keystone result = self._query(query)", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query", "2026-01-05 01:03:37.900 1079 ERROR keystone conn.query(q)", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query", "2026-01-05 01:03:37.900 1079 ERROR keystone self._affected_rows = self._read_query_result(unbuffered=unbuffered)", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result", "2026-01-05 01:03:37.900 1079 ERROR keystone result.read()", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read", "2026-01-05 01:03:37.900 1079 ERROR keystone first_packet = self.connection._read_packet()", "2026-01-05 01:03:37.900 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in _read_packet", "2026-01-05 01:03:37.900 1079 ERROR keystone packet.raise_for_error()", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error", "2026-01-05 01:03:37.900 1079 ERROR keystone err.raise_mysql_exception(self._data)", "2026-01-05 01:03:37.900 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception", "2026-01-05 01:03:37.900 1079 
ERROR keystone raise errorclass(errno, errval)", "2026-01-05 01:03:37.900 1079 ERROR keystone sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system variable 'transaction_isolation'\")", "2026-01-05 01:03:37.900 1079 ERROR keystone (Background on this error at: https://sqlalche.me/e/20/e3q8)", "2026-01-05 01:03:37.900 1079 ERROR keystone "], "stdout": "Updating certificates in /etc/ssl/certs...\n1 added, 0 removed; done.\nRunning hooks in /etc/ca-certificates/update.d...\ndone.\n", "stdout_lines": ["Updating certificates in /etc/ssl/certs...", "1 added, 0 removed; done.", "Running hooks in /etc/ca-certificates/update.d...", "done."]} 2026-01-05 01:03:42.546420 | orchestrator | 2026-01-05 01:03:42.546435 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:03:42.546447 | orchestrator | testbed-node-0 : ok=21  changed=11  unreachable=0 failed=1  skipped=12  rescued=0 ignored=0 2026-01-05 01:03:42.546465 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-05 01:03:42.546476 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-05 01:03:42.546485 | orchestrator | 2026-01-05 01:03:42.546495 | orchestrator | 2026-01-05 01:03:42.546505 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:03:42.546514 | orchestrator | Monday 05 January 2026 01:03:39 +0000 (0:00:13.381) 0:01:05.512 ******** 2026-01-05 01:03:42.546524 | orchestrator | =============================================================================== 2026-01-05 01:03:42.546533 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.38s 2026-01-05 01:03:42.546543 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.80s 2026-01-05 01:03:42.546553 | orchestrator | 
keystone : Copying over keystone.conf ----------------------------------- 6.31s 2026-01-05 01:03:42.546562 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.52s 2026-01-05 01:03:42.546572 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.29s 2026-01-05 01:03:42.546581 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.03s 2026-01-05 01:03:42.546652 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.62s 2026-01-05 01:03:42.546665 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.59s 2026-01-05 01:03:42.546675 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.53s 2026-01-05 01:03:42.546684 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.97s 2026-01-05 01:03:42.546694 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.71s 2026-01-05 01:03:42.546704 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.58s 2026-01-05 01:03:42.546713 | orchestrator | keystone : Generate the required cron jobs for the node ----------------- 1.19s 2026-01-05 01:03:42.546723 | orchestrator | keystone : Checking whether keystone-paste.ini file exists -------------- 0.93s 2026-01-05 01:03:42.546732 | orchestrator | keystone : Check if Keystone domain-specific config is supplied --------- 0.92s 2026-01-05 01:03:42.546742 | orchestrator | keystone : Copying over keystone-paste.ini ------------------------------ 0.90s 2026-01-05 01:03:42.546751 | orchestrator | service-cert-copy : keystone | Copying over backend internal TLS key ---- 0.84s 2026-01-05 01:03:42.546761 | orchestrator | keystone : Checking for any running keystone_fernet containers ---------- 0.84s 2026-01-05 01:03:42.546843 | orchestrator | 
service-cert-copy : keystone | Copying over backend internal TLS certificate --- 0.80s 2026-01-05 01:03:42.546855 | orchestrator | keystone : Copying over existing policy file ---------------------------- 0.63s 2026-01-05 01:03:42.546889 | orchestrator | 2026-01-05 01:03:42 | INFO  | Task 00c00486-57c0-4099-83e8-aed474fba234 is in state STARTED 2026-01-05 01:03:42.546899 | orchestrator | 2026-01-05 01:03:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:03:45.579520 | orchestrator | 2026-01-05 01:03:45 | INFO  | Task ef4df852-edf7-46c1-b7a4-d31a202b2cd3 is in state STARTED 2026-01-05 01:03:45.579674 | orchestrator | 2026-01-05 01:03:45 | INFO  | Task dd114b84-7e37-4b00-a1d4-5f1d61828f7c is in state STARTED 2026-01-05 01:03:45.580525 | orchestrator | 2026-01-05 01:03:45 | INFO  | Task 7fe86e08-6f30-4466-9205-5989e2e6ba5f is in state STARTED 2026-01-05 01:03:45.581437 | orchestrator | 2026-01-05 01:03:45 | INFO  | Task 43444a8f-52ed-434e-8806-dfae922b92ce is in state STARTED 2026-01-05 01:03:45.582532 | orchestrator | 2026-01-05 01:03:45 | INFO  | Task 00c00486-57c0-4099-83e8-aed474fba234 is in state STARTED 2026-01-05 01:03:45.582566 | orchestrator | 2026-01-05 01:03:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:03:48.633105 | orchestrator | 2026-01-05 01:03:48 | INFO  | Task ef4df852-edf7-46c1-b7a4-d31a202b2cd3 is in state STARTED 2026-01-05 01:03:48.634729 | orchestrator | 2026-01-05 01:03:48 | INFO  | Task dd114b84-7e37-4b00-a1d4-5f1d61828f7c is in state STARTED 2026-01-05 01:03:48.635967 | orchestrator | 2026-01-05 01:03:48 | INFO  | Task 7fe86e08-6f30-4466-9205-5989e2e6ba5f is in state STARTED 2026-01-05 01:03:48.639139 | orchestrator | 2026-01-05 01:03:48 | INFO  | Task 43444a8f-52ed-434e-8806-dfae922b92ce is in state STARTED 2026-01-05 01:03:48.646142 | orchestrator | 2026-01-05 01:03:48 | INFO  | Task 00c00486-57c0-4099-83e8-aed474fba234 is in state STARTED 2026-01-05 01:03:48.646220 | orchestrator | 2026-01-05 01:03:48 | 
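Annotation (not part of the job log): the keystone bootstrap failure above is `pymysql.err.OperationalError: (1193, "Unknown system variable 'transaction_isolation'")`, raised when SQLAlchemy's MySQL dialect runs `SELECT @@transaction_isolation` during dialect initialization. That variable exists in MySQL >= 5.7.20 and MariaDB >= 11.1.1; older MariaDB servers only expose `tx_isolation`, so the query fails with error 1193. The sketch below is a hypothetical helper (the function name and version cutoffs are our annotation, not code from this job) showing which query a given server version can answer:

```python
import re

def isolation_level_query(server_version: str) -> str:
    """Return the isolation-level SELECT an older/newer server understands.

    MySQL added `transaction_isolation` in 5.7.20 (removing `tx_isolation`
    in 8.0); MariaDB added it in 11.1.1. Older MariaDB, as apparently used
    here, only knows `tx_isolation` and returns error 1193 otherwise.
    """
    m = re.search(r"(\d+)\.(\d+)\.(\d+)", server_version)
    version = tuple(int(x) for x in m.groups()) if m else (0, 0, 0)
    if "MariaDB" in server_version:
        modern = version >= (11, 1, 1)
    else:  # assume MySQL-compatible version string
        modern = version >= (5, 7, 20)
    var = "transaction_isolation" if modern else "tx_isolation"
    return f"SELECT @@{var}"
```

In practice this usually means the SQLAlchemy version in the kolla venv is newer than the deployed MariaDB supports; aligning the database version (or the SQLAlchemy/dialect version) resolves the bootstrap failure.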
INFO  | Wait 1 second(s) until the next check 2026-01-05 01:03:51.702186 | orchestrator | 2026-01-05 01:03:51 | INFO  | Task ef4df852-edf7-46c1-b7a4-d31a202b2cd3 is in state STARTED 2026-01-05 01:03:51.705760 | orchestrator | 2026-01-05 01:03:51 | INFO  | Task dd114b84-7e37-4b00-a1d4-5f1d61828f7c is in state STARTED 2026-01-05 01:03:51.708913 | orchestrator | 2026-01-05 01:03:51 | INFO  | Task 8a21ed30-2309-4320-bf5c-dd384efaa17e is in state STARTED 2026-01-05 01:03:51.711048 | orchestrator | 2026-01-05 01:03:51 | INFO  | Task 7fe86e08-6f30-4466-9205-5989e2e6ba5f is in state STARTED 2026-01-05 01:03:51.715653 | orchestrator | 2026-01-05 01:03:51 | INFO  | Task 43444a8f-52ed-434e-8806-dfae922b92ce is in state SUCCESS 2026-01-05 01:03:51.718205 | orchestrator | 2026-01-05 01:03:51.718643 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-05 01:03:51.718681 | orchestrator | 2.16.14 2026-01-05 01:03:51.718705 | orchestrator | 2026-01-05 01:03:51.718725 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-01-05 01:03:51.718744 | orchestrator | 2026-01-05 01:03:51.718762 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-05 01:03:51.718782 | orchestrator | Monday 05 January 2026 01:01:31 +0000 (0:00:00.636) 0:00:00.636 ******** 2026-01-05 01:03:51.718800 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:03:51.718820 | orchestrator | 2026-01-05 01:03:51.718838 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-05 01:03:51.718857 | orchestrator | Monday 05 January 2026 01:01:32 +0000 (0:00:00.743) 0:00:01.379 ******** 2026-01-05 01:03:51.718875 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:03:51.718894 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:03:51.720276 | orchestrator 
| ok: [testbed-node-5] 2026-01-05 01:03:51.720321 | orchestrator | 2026-01-05 01:03:51.720343 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-05 01:03:51.720362 | orchestrator | Monday 05 January 2026 01:01:33 +0000 (0:00:00.671) 0:00:02.051 ******** 2026-01-05 01:03:51.720382 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:03:51.720402 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:03:51.720421 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:03:51.720439 | orchestrator | 2026-01-05 01:03:51.720451 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-05 01:03:51.720462 | orchestrator | Monday 05 January 2026 01:01:33 +0000 (0:00:00.317) 0:00:02.368 ******** 2026-01-05 01:03:51.720472 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:03:51.720483 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:03:51.720494 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:03:51.720505 | orchestrator | 2026-01-05 01:03:51.720516 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-05 01:03:51.720527 | orchestrator | Monday 05 January 2026 01:01:34 +0000 (0:00:00.873) 0:00:03.241 ******** 2026-01-05 01:03:51.720537 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:03:51.720549 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:03:51.720561 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:03:51.720615 | orchestrator | 2026-01-05 01:03:51.720635 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-05 01:03:51.720652 | orchestrator | Monday 05 January 2026 01:01:34 +0000 (0:00:00.314) 0:00:03.556 ******** 2026-01-05 01:03:51.720669 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:03:51.720729 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:03:51.720745 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:03:51.720762 | orchestrator | 
2026-01-05 01:03:51.720774 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-05 01:03:51.720784 | orchestrator | Monday 05 January 2026 01:01:35 +0000 (0:00:00.308) 0:00:03.864 ******** 2026-01-05 01:03:51.720793 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:03:51.720803 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:03:51.720813 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:03:51.720824 | orchestrator | 2026-01-05 01:03:51.720835 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-05 01:03:51.720848 | orchestrator | Monday 05 January 2026 01:01:35 +0000 (0:00:00.374) 0:00:04.239 ******** 2026-01-05 01:03:51.720859 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:03:51.720872 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:03:51.720884 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:03:51.720895 | orchestrator | 2026-01-05 01:03:51.720907 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-05 01:03:51.720919 | orchestrator | Monday 05 January 2026 01:01:36 +0000 (0:00:00.568) 0:00:04.807 ******** 2026-01-05 01:03:51.720929 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:03:51.720938 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:03:51.720951 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:03:51.720967 | orchestrator | 2026-01-05 01:03:51.721004 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-05 01:03:51.721028 | orchestrator | Monday 05 January 2026 01:01:36 +0000 (0:00:00.326) 0:00:05.134 ******** 2026-01-05 01:03:51.721043 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-05 01:03:51.721058 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-05 01:03:51.721074 | orchestrator | 
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-05 01:03:51.721089 | orchestrator | 2026-01-05 01:03:51.721102 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-05 01:03:51.721116 | orchestrator | Monday 05 January 2026 01:01:37 +0000 (0:00:00.685) 0:00:05.819 ******** 2026-01-05 01:03:51.721129 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:03:51.721144 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:03:51.721176 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:03:51.721192 | orchestrator | 2026-01-05 01:03:51.721207 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-01-05 01:03:51.721222 | orchestrator | Monday 05 January 2026 01:01:37 +0000 (0:00:00.475) 0:00:06.295 ******** 2026-01-05 01:03:51.721238 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-05 01:03:51.721251 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-05 01:03:51.721264 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-05 01:03:51.721277 | orchestrator | 2026-01-05 01:03:51.721294 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-05 01:03:51.721309 | orchestrator | Monday 05 January 2026 01:01:39 +0000 (0:00:02.482) 0:00:08.778 ******** 2026-01-05 01:03:51.721324 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-05 01:03:51.721339 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-05 01:03:51.721353 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-05 01:03:51.721368 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:03:51.721382 | orchestrator | 2026-01-05 01:03:51.721486 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket 
is in-use] ********************* 2026-01-05 01:03:51.721504 | orchestrator | Monday 05 January 2026 01:01:40 +0000 (0:00:00.787) 0:00:09.565 ******** 2026-01-05 01:03:51.721522 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-05 01:03:51.721541 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-05 01:03:51.721555 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-05 01:03:51.721570 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:03:51.721612 | orchestrator | 2026-01-05 01:03:51.721627 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-05 01:03:51.721643 | orchestrator | Monday 05 January 2026 01:01:41 +0000 (0:00:00.973) 0:00:10.538 ******** 2026-01-05 01:03:51.721662 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-05 01:03:51.721680 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-05 01:03:51.721695 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-05 01:03:51.721709 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:03:51.721725 | orchestrator | 2026-01-05 01:03:51.721756 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-05 01:03:51.721772 | orchestrator | Monday 05 January 2026 01:01:42 +0000 (0:00:00.413) 0:00:10.952 ******** 2026-01-05 01:03:51.721801 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '2a2363174708', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-05 01:01:38.264338', 'end': '2026-01-05 01:01:38.322648', 'delta': '0:00:00.058310', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a2363174708'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-01-05 01:03:51.721822 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a41b77b7d8ab', 'stderr': 
'', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-05 01:01:39.236565', 'end': '2026-01-05 01:01:39.273029', 'delta': '0:00:00.036464', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a41b77b7d8ab'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-01-05 01:03:51.721894 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '9dc59ec86845', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-05 01:01:39.808132', 'end': '2026-01-05 01:01:39.849887', 'delta': '0:00:00.041755', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9dc59ec86845'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-01-05 01:03:51.721914 | orchestrator | 2026-01-05 01:03:51.721930 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-05 01:03:51.721946 | orchestrator | Monday 05 January 2026 01:01:42 +0000 (0:00:00.221) 0:00:11.173 ******** 2026-01-05 01:03:51.721961 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:03:51.721977 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:03:51.721992 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:03:51.722007 | orchestrator | 2026-01-05 
01:03:51.722068 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-05 01:03:51.722084 | orchestrator | Monday 05 January 2026 01:01:42 +0000 (0:00:00.445) 0:00:11.618 ******** 2026-01-05 01:03:51.722099 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-01-05 01:03:51.722114 | orchestrator | 2026-01-05 01:03:51.722128 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-01-05 01:03:51.722143 | orchestrator | Monday 05 January 2026 01:01:44 +0000 (0:00:01.827) 0:00:13.446 ******** 2026-01-05 01:03:51.722182 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:03:51.722211 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:03:51.722226 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:03:51.722241 | orchestrator | 2026-01-05 01:03:51.722257 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-05 01:03:51.722273 | orchestrator | Monday 05 January 2026 01:01:44 +0000 (0:00:00.312) 0:00:13.759 ******** 2026-01-05 01:03:51.722288 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:03:51.722318 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:03:51.722333 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:03:51.722349 | orchestrator | 2026-01-05 01:03:51.722365 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-05 01:03:51.722381 | orchestrator | Monday 05 January 2026 01:01:45 +0000 (0:00:00.451) 0:00:14.211 ******** 2026-01-05 01:03:51.722397 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:03:51.722413 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:03:51.722429 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:03:51.722444 | orchestrator | 2026-01-05 01:03:51.722459 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 
2026-01-05 01:03:51.722475 | orchestrator | Monday 05 January 2026 01:01:45 +0000 (0:00:00.558) 0:00:14.770 ******** 2026-01-05 01:03:51.722491 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:03:51.722506 | orchestrator | 2026-01-05 01:03:51.722524 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-01-05 01:03:51.722539 | orchestrator | Monday 05 January 2026 01:01:46 +0000 (0:00:00.149) 0:00:14.919 ******** 2026-01-05 01:03:51.722555 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:03:51.722570 | orchestrator | 2026-01-05 01:03:51.722615 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-05 01:03:51.722632 | orchestrator | Monday 05 January 2026 01:01:46 +0000 (0:00:00.251) 0:00:15.171 ******** 2026-01-05 01:03:51.722661 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:03:51.722673 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:03:51.722682 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:03:51.722692 | orchestrator | 2026-01-05 01:03:51.722702 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-05 01:03:51.722712 | orchestrator | Monday 05 January 2026 01:01:46 +0000 (0:00:00.263) 0:00:15.435 ******** 2026-01-05 01:03:51.722721 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:03:51.722731 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:03:51.722741 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:03:51.722750 | orchestrator | 2026-01-05 01:03:51.722760 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-01-05 01:03:51.722770 | orchestrator | Monday 05 January 2026 01:01:46 +0000 (0:00:00.291) 0:00:15.727 ******** 2026-01-05 01:03:51.722780 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:03:51.722789 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:03:51.722799 | 
orchestrator | skipping: [testbed-node-5] 2026-01-05 01:03:51.722809 | orchestrator | 2026-01-05 01:03:51.722818 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-01-05 01:03:51.722828 | orchestrator | Monday 05 January 2026 01:01:47 +0000 (0:00:00.444) 0:00:16.172 ******** 2026-01-05 01:03:51.722837 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:03:51.722847 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:03:51.722857 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:03:51.722866 | orchestrator | 2026-01-05 01:03:51.722876 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-01-05 01:03:51.722885 | orchestrator | Monday 05 January 2026 01:01:47 +0000 (0:00:00.317) 0:00:16.489 ******** 2026-01-05 01:03:51.722893 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:03:51.722901 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:03:51.722909 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:03:51.722917 | orchestrator | 2026-01-05 01:03:51.722925 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-05 01:03:51.722933 | orchestrator | Monday 05 January 2026 01:01:48 +0000 (0:00:00.289) 0:00:16.779 ******** 2026-01-05 01:03:51.722941 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:03:51.722949 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:03:51.722956 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:03:51.723014 | orchestrator | 2026-01-05 01:03:51.723030 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-01-05 01:03:51.723062 | orchestrator | Monday 05 January 2026 01:01:48 +0000 (0:00:00.285) 0:00:17.064 ******** 2026-01-05 01:03:51.723078 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:03:51.723092 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:03:51.723105 | 
orchestrator | skipping: [testbed-node-5] 2026-01-05 01:03:51.723118 | orchestrator | 2026-01-05 01:03:51.723130 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-01-05 01:03:51.723143 | orchestrator | Monday 05 January 2026 01:01:48 +0000 (0:00:00.485) 0:00:17.549 ******** 2026-01-05 01:03:51.723158 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5dd43ce6--96bd--500c--b036--3c9652e3f870-osd--block--5dd43ce6--96bd--500c--b036--3c9652e3f870', 'dm-uuid-LVM-MRS6l1IAkKZkcgde5V97M1EMcnMqW3KrWMak6G2cCTR1eTmdrPCzGKQ7dp26Sw0L'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 01:03:51.723173 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6f45f623--6f4a--59be--980f--23e900ac5d1d-osd--block--6f45f623--6f4a--59be--980f--23e900ac5d1d', 'dm-uuid-LVM-dMSf1iDZpYOiEcelFI9OhV4BqXMF9J3XuaegpFaqFBpSVeWjMCdZGLJXaFwDWJkJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 01:03:51.723188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:03:51.723205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:03:51.723229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:03:51.723244 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:03:51.723260 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:03:51.723327 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:03:51.723338 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:03:51.723346 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:03:51.723363 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f', 'scsi-SQEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f-part1', 'scsi-SQEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f-part14', 'scsi-SQEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f-part15', 'scsi-SQEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f-part16', 'scsi-SQEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:03:51.723375 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--5dd43ce6--96bd--500c--b036--3c9652e3f870-osd--block--5dd43ce6--96bd--500c--b036--3c9652e3f870'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LElmMj-QxHX-v7CL-WeUG-BWYV-FdPv-dF20Gl', 'scsi-0QEMU_QEMU_HARDDISK_40600621-aef8-490d-8855-2a618a83589e', 'scsi-SQEMU_QEMU_HARDDISK_40600621-aef8-490d-8855-2a618a83589e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:03:51.723415 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6f45f623--6f4a--59be--980f--23e900ac5d1d-osd--block--6f45f623--6f4a--59be--980f--23e900ac5d1d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xGBT5x-8Tbz-PsiS-It5s-MMN8-JZB0-adaZAB', 'scsi-0QEMU_QEMU_HARDDISK_423e4112-2158-480f-994d-106730fe425c', 'scsi-SQEMU_QEMU_HARDDISK_423e4112-2158-480f-994d-106730fe425c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:03:51.723426 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bd4e3544--7c7e--58ac--a4cc--590b648d75bf-osd--block--bd4e3544--7c7e--58ac--a4cc--590b648d75bf', 'dm-uuid-LVM-Y1ILTfcYxwsemW78hlDn0ywfi8DN4JXxhHxIRulY0sc7u2rAebOgnUYbiPFpUItE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 01:03:51.723435 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_177f10be-5bcc-4fc5-a906-9c9dfc4c0725', 'scsi-SQEMU_QEMU_HARDDISK_177f10be-5bcc-4fc5-a906-9c9dfc4c0725'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:03:51.723444 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--35e03706--0bf5--5720--bc24--6001f60a2be0-osd--block--35e03706--0bf5--5720--bc24--6001f60a2be0', 'dm-uuid-LVM-GYepXQFoGtbQElW2LEnFOoJC2SC8ItgfMcQViTHK0hYiatEG3Gclkza6tpiTXAMc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 01:03:51.723457 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-02-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:03:51.723466 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:03:51.723474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:03:51.723568 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:03:51.723608 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:03:51.723617 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-01-05 01:03:51.723625 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:03:51.723633 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:03:51.723641 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:03:51.723650 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:03:51.723694 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64', 'scsi-SQEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64-part1', 'scsi-SQEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64-part14', 'scsi-SQEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64-part15', 'scsi-SQEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64-part16', 'scsi-SQEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:03:51.723713 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--bd4e3544--7c7e--58ac--a4cc--590b648d75bf-osd--block--bd4e3544--7c7e--58ac--a4cc--590b648d75bf'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZdmmZx-gddZ-3NQk-p78B-1iPq-ZrZ7-RfMK3x', 'scsi-0QEMU_QEMU_HARDDISK_faa0d012-340f-4cbd-a064-876345a11d6a', 'scsi-SQEMU_QEMU_HARDDISK_faa0d012-340f-4cbd-a064-876345a11d6a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:03:51.723721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--35e03706--0bf5--5720--bc24--6001f60a2be0-osd--block--35e03706--0bf5--5720--bc24--6001f60a2be0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-c3mc6Y-izxE-ZkGV-iJVS-rMd1-Ah2v-MsRqAm', 'scsi-0QEMU_QEMU_HARDDISK_79f451b0-665e-4ae6-bc28-e4c9d18e1f8d', 'scsi-SQEMU_QEMU_HARDDISK_79f451b0-665e-4ae6-bc28-e4c9d18e1f8d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:03:51.723730 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f2726894--ebb3--5d48--8b2e--e077f444c4ac-osd--block--f2726894--ebb3--5d48--8b2e--e077f444c4ac', 'dm-uuid-LVM-NJJ3mj0110hGanpgAn0DfkDe3aCEbZl6SsBfXOJX0Fmboc6CeLEDMr6ptd0ICwRT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 01:03:51.723742 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_165d58d7-2860-4843-bbd3-8318e20b6051', 'scsi-SQEMU_QEMU_HARDDISK_165d58d7-2860-4843-bbd3-8318e20b6051'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:03:51.723757 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--edc09b40--6ec9--59c0--95b4--baacc31b5a92-osd--block--edc09b40--6ec9--59c0--95b4--baacc31b5a92', 'dm-uuid-LVM-Uy1gt3vDGof4bxOmSu3qFRdyPeKP9BsyAft6rhxnraj1pJ9uZtmBjigQE0gTXBC3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 01:03:51.723772 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:03:51.723780 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:03:51.723789 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:03:51.723797 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:03:51.723805 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:03:51.723813 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:03:51.723822 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 01:03:51.723834 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 01:03:51.723842 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 01:03:51.723856 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-05 01:03:51.723884 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305', 'scsi-SQEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305-part1', 'scsi-SQEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305-part14', 'scsi-SQEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305-part15', 'scsi-SQEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305-part16', 'scsi-SQEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-05 01:03:51.723900 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f2726894--ebb3--5d48--8b2e--e077f444c4ac-osd--block--f2726894--ebb3--5d48--8b2e--e077f444c4ac'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2RR5of-j2i6-Eldl-JMfj-d8cv-dWlx-QICqMn', 'scsi-0QEMU_QEMU_HARDDISK_23055056-069f-450b-aeeb-5eb50c3216da', 'scsi-SQEMU_QEMU_HARDDISK_23055056-069f-450b-aeeb-5eb50c3216da'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-05 01:03:51.723920 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--edc09b40--6ec9--59c0--95b4--baacc31b5a92-osd--block--edc09b40--6ec9--59c0--95b4--baacc31b5a92'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-nvzYZd-l3rJ-Ej6t-6vq8-YsXl-wCLG-UHGvYS', 'scsi-0QEMU_QEMU_HARDDISK_bd2b6514-9bcf-45c0-8865-be606d512acf', 'scsi-SQEMU_QEMU_HARDDISK_bd2b6514-9bcf-45c0-8865-be606d512acf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-05 01:03:51.723942 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a447ecf7-81d3-4a74-8944-683d4141cf1b', 'scsi-SQEMU_QEMU_HARDDISK_a447ecf7-81d3-4a74-8944-683d4141cf1b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-05 01:03:51.723965 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-02-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-05 01:03:51.723981 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:03:51.723995 | orchestrator |
2026-01-05 01:03:51.724010 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-01-05 01:03:51.724024 | orchestrator | Monday 05 January 2026 01:01:49 +0000 (0:00:00.546) 0:00:18.096 ********
2026-01-05 01:03:51.724038 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5dd43ce6--96bd--500c--b036--3c9652e3f870-osd--block--5dd43ce6--96bd--500c--b036--3c9652e3f870', 'dm-uuid-LVM-MRS6l1IAkKZkcgde5V97M1EMcnMqW3KrWMak6G2cCTR1eTmdrPCzGKQ7dp26Sw0L'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724054 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6f45f623--6f4a--59be--980f--23e900ac5d1d-osd--block--6f45f623--6f4a--59be--980f--23e900ac5d1d', 'dm-uuid-LVM-dMSf1iDZpYOiEcelFI9OhV4BqXMF9J3XuaegpFaqFBpSVeWjMCdZGLJXaFwDWJkJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724075 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724103 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724120 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724170 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724181 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724189 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724198 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bd4e3544--7c7e--58ac--a4cc--590b648d75bf-osd--block--bd4e3544--7c7e--58ac--a4cc--590b648d75bf', 'dm-uuid-LVM-Y1ILTfcYxwsemW78hlDn0ywfi8DN4JXxhHxIRulY0sc7u2rAebOgnUYbiPFpUItE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724213 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724222 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--35e03706--0bf5--5720--bc24--6001f60a2be0-osd--block--35e03706--0bf5--5720--bc24--6001f60a2be0', 'dm-uuid-LVM-GYepXQFoGtbQElW2LEnFOoJC2SC8ItgfMcQViTHK0hYiatEG3Gclkza6tpiTXAMc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724238 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724253 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724321 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f', 'scsi-SQEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f-part1', 'scsi-SQEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f-part14', 'scsi-SQEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f-part15', 'scsi-SQEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f-part16', 'scsi-SQEMU_QEMU_HARDDISK_d9814992-acb0-4fb6-b869-372bf4d7de3f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724347 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724371 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--5dd43ce6--96bd--500c--b036--3c9652e3f870-osd--block--5dd43ce6--96bd--500c--b036--3c9652e3f870'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LElmMj-QxHX-v7CL-WeUG-BWYV-FdPv-dF20Gl', 'scsi-0QEMU_QEMU_HARDDISK_40600621-aef8-490d-8855-2a618a83589e', 'scsi-SQEMU_QEMU_HARDDISK_40600621-aef8-490d-8855-2a618a83589e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724387 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724401 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724420 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--6f45f623--6f4a--59be--980f--23e900ac5d1d-osd--block--6f45f623--6f4a--59be--980f--23e900ac5d1d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xGBT5x-8Tbz-PsiS-It5s-MMN8-JZB0-adaZAB', 'scsi-0QEMU_QEMU_HARDDISK_423e4112-2158-480f-994d-106730fe425c', 'scsi-SQEMU_QEMU_HARDDISK_423e4112-2158-480f-994d-106730fe425c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724449 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724466 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_177f10be-5bcc-4fc5-a906-9c9dfc4c0725', 'scsi-SQEMU_QEMU_HARDDISK_177f10be-5bcc-4fc5-a906-9c9dfc4c0725'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724474 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724484 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-02-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724492 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:03:51.724500 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724519 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724536 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64', 'scsi-SQEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64-part1', 'scsi-SQEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64-part14', 'scsi-SQEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64-part15', 'scsi-SQEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64-part16', 'scsi-SQEMU_QEMU_HARDDISK_f65865d2-fa4a-4078-a136-ae0091ff8f64-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724545 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--bd4e3544--7c7e--58ac--a4cc--590b648d75bf-osd--block--bd4e3544--7c7e--58ac--a4cc--590b648d75bf'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZdmmZx-gddZ-3NQk-p78B-1iPq-ZrZ7-RfMK3x', 'scsi-0QEMU_QEMU_HARDDISK_faa0d012-340f-4cbd-a064-876345a11d6a', 'scsi-SQEMU_QEMU_HARDDISK_faa0d012-340f-4cbd-a064-876345a11d6a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724563 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f2726894--ebb3--5d48--8b2e--e077f444c4ac-osd--block--f2726894--ebb3--5d48--8b2e--e077f444c4ac', 'dm-uuid-LVM-NJJ3mj0110hGanpgAn0DfkDe3aCEbZl6SsBfXOJX0Fmboc6CeLEDMr6ptd0ICwRT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724725 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--35e03706--0bf5--5720--bc24--6001f60a2be0-osd--block--35e03706--0bf5--5720--bc24--6001f60a2be0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-c3mc6Y-izxE-ZkGV-iJVS-rMd1-Ah2v-MsRqAm', 'scsi-0QEMU_QEMU_HARDDISK_79f451b0-665e-4ae6-bc28-e4c9d18e1f8d', 'scsi-SQEMU_QEMU_HARDDISK_79f451b0-665e-4ae6-bc28-e4c9d18e1f8d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724792 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--edc09b40--6ec9--59c0--95b4--baacc31b5a92-osd--block--edc09b40--6ec9--59c0--95b4--baacc31b5a92', 'dm-uuid-LVM-Uy1gt3vDGof4bxOmSu3qFRdyPeKP9BsyAft6rhxnraj1pJ9uZtmBjigQE0gTXBC3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724803 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_165d58d7-2860-4843-bbd3-8318e20b6051', 'scsi-SQEMU_QEMU_HARDDISK_165d58d7-2860-4843-bbd3-8318e20b6051'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724811 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724830 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724844 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724853 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:03:51.724862 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724877 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724886 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724895 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724909 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:03:51.724921 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason':
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:03:51.724938 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305', 'scsi-SQEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305-part1', 'scsi-SQEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305-part14', 'scsi-SQEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305-part15', 'scsi-SQEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305-part15'], 
'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305-part16', 'scsi-SQEMU_QEMU_HARDDISK_9600cb02-fd9e-4a41-92d8-08e734250305-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:03:51.724948 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f2726894--ebb3--5d48--8b2e--e077f444c4ac-osd--block--f2726894--ebb3--5d48--8b2e--e077f444c4ac'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2RR5of-j2i6-Eldl-JMfj-d8cv-dWlx-QICqMn', 'scsi-0QEMU_QEMU_HARDDISK_23055056-069f-450b-aeeb-5eb50c3216da', 'scsi-SQEMU_QEMU_HARDDISK_23055056-069f-450b-aeeb-5eb50c3216da'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:03:51.724966 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--edc09b40--6ec9--59c0--95b4--baacc31b5a92-osd--block--edc09b40--6ec9--59c0--95b4--baacc31b5a92'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-nvzYZd-l3rJ-Ej6t-6vq8-YsXl-wCLG-UHGvYS', 'scsi-0QEMU_QEMU_HARDDISK_bd2b6514-9bcf-45c0-8865-be606d512acf', 'scsi-SQEMU_QEMU_HARDDISK_bd2b6514-9bcf-45c0-8865-be606d512acf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:03:51.724975 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a447ecf7-81d3-4a74-8944-683d4141cf1b', 'scsi-SQEMU_QEMU_HARDDISK_a447ecf7-81d3-4a74-8944-683d4141cf1b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:03:51.724989 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-02-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:03:51.724997 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:03:51.725006 | orchestrator | 2026-01-05 01:03:51.725014 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-01-05 01:03:51.725023 | orchestrator | Monday 05 January 2026 01:01:49 +0000 (0:00:00.559) 0:00:18.655 ******** 2026-01-05 01:03:51.725031 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:03:51.725039 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:03:51.725047 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:03:51.725055 | orchestrator | 2026-01-05 01:03:51.725063 | orchestrator | TASK [ceph-facts : Set default 
osd_pool_default_crush_rule fact] *************** 2026-01-05 01:03:51.725071 | orchestrator | Monday 05 January 2026 01:01:50 +0000 (0:00:00.738) 0:00:19.394 ******** 2026-01-05 01:03:51.725084 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:03:51.725092 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:03:51.725100 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:03:51.725107 | orchestrator | 2026-01-05 01:03:51.725116 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-05 01:03:51.725123 | orchestrator | Monday 05 January 2026 01:01:51 +0000 (0:00:00.441) 0:00:19.835 ******** 2026-01-05 01:03:51.725131 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:03:51.725139 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:03:51.725147 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:03:51.725155 | orchestrator | 2026-01-05 01:03:51.725163 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-05 01:03:51.725170 | orchestrator | Monday 05 January 2026 01:01:51 +0000 (0:00:00.646) 0:00:20.482 ******** 2026-01-05 01:03:51.725178 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:03:51.725186 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:03:51.725194 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:03:51.725202 | orchestrator | 2026-01-05 01:03:51.725210 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-05 01:03:51.725218 | orchestrator | Monday 05 January 2026 01:01:52 +0000 (0:00:00.300) 0:00:20.782 ******** 2026-01-05 01:03:51.725226 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:03:51.725234 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:03:51.725242 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:03:51.725249 | orchestrator | 2026-01-05 01:03:51.725257 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] 
*********************** 2026-01-05 01:03:51.725265 | orchestrator | Monday 05 January 2026 01:01:52 +0000 (0:00:00.389) 0:00:21.172 ******** 2026-01-05 01:03:51.725273 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:03:51.725281 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:03:51.725289 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:03:51.725297 | orchestrator | 2026-01-05 01:03:51.725305 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-05 01:03:51.725312 | orchestrator | Monday 05 January 2026 01:01:52 +0000 (0:00:00.432) 0:00:21.605 ******** 2026-01-05 01:03:51.725320 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-01-05 01:03:51.725329 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-01-05 01:03:51.725336 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-01-05 01:03:51.725344 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-01-05 01:03:51.725352 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-01-05 01:03:51.725360 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-01-05 01:03:51.725367 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-01-05 01:03:51.725380 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-01-05 01:03:51.725388 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-01-05 01:03:51.725396 | orchestrator | 2026-01-05 01:03:51.725404 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-05 01:03:51.725411 | orchestrator | Monday 05 January 2026 01:01:53 +0000 (0:00:00.765) 0:00:22.370 ******** 2026-01-05 01:03:51.725419 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-05 01:03:51.725427 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-05 01:03:51.725435 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-2)  2026-01-05 01:03:51.725443 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:03:51.725451 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-01-05 01:03:51.725459 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-01-05 01:03:51.725467 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-01-05 01:03:51.725475 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:03:51.725482 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-01-05 01:03:51.725490 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-01-05 01:03:51.725507 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-01-05 01:03:51.725515 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:03:51.725523 | orchestrator | 2026-01-05 01:03:51.725531 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-01-05 01:03:51.725539 | orchestrator | Monday 05 January 2026 01:01:53 +0000 (0:00:00.358) 0:00:22.728 ******** 2026-01-05 01:03:51.725547 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:03:51.725555 | orchestrator | 2026-01-05 01:03:51.725563 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-05 01:03:51.725598 | orchestrator | Monday 05 January 2026 01:01:54 +0000 (0:00:00.688) 0:00:23.417 ******** 2026-01-05 01:03:51.725612 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:03:51.725620 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:03:51.725628 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:03:51.725636 | orchestrator | 2026-01-05 01:03:51.725644 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block 
ipv4] **** 2026-01-05 01:03:51.725652 | orchestrator | Monday 05 January 2026 01:01:54 +0000 (0:00:00.292) 0:00:23.710 ******** 2026-01-05 01:03:51.725660 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:03:51.725668 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:03:51.725675 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:03:51.725683 | orchestrator | 2026-01-05 01:03:51.725691 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-05 01:03:51.725699 | orchestrator | Monday 05 January 2026 01:01:55 +0000 (0:00:00.316) 0:00:24.027 ******** 2026-01-05 01:03:51.725707 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:03:51.725715 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:03:51.725723 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:03:51.725731 | orchestrator | 2026-01-05 01:03:51.725739 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-05 01:03:51.725747 | orchestrator | Monday 05 January 2026 01:01:55 +0000 (0:00:00.318) 0:00:24.346 ******** 2026-01-05 01:03:51.725755 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:03:51.725763 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:03:51.725771 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:03:51.725779 | orchestrator | 2026-01-05 01:03:51.725786 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-05 01:03:51.725794 | orchestrator | Monday 05 January 2026 01:01:56 +0000 (0:00:00.769) 0:00:25.115 ******** 2026-01-05 01:03:51.725802 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 01:03:51.725810 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 01:03:51.725818 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 01:03:51.725826 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:03:51.725833 | 
orchestrator | 2026-01-05 01:03:51.725841 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-05 01:03:51.725849 | orchestrator | Monday 05 January 2026 01:01:56 +0000 (0:00:00.339) 0:00:25.455 ******** 2026-01-05 01:03:51.725857 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 01:03:51.725865 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 01:03:51.725873 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 01:03:51.725881 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:03:51.725888 | orchestrator | 2026-01-05 01:03:51.725896 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-05 01:03:51.725904 | orchestrator | Monday 05 January 2026 01:01:57 +0000 (0:00:00.362) 0:00:25.817 ******** 2026-01-05 01:03:51.725912 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 01:03:51.725920 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 01:03:51.725928 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 01:03:51.725942 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:03:51.725950 | orchestrator | 2026-01-05 01:03:51.725958 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-05 01:03:51.725965 | orchestrator | Monday 05 January 2026 01:01:57 +0000 (0:00:00.335) 0:00:26.152 ******** 2026-01-05 01:03:51.725973 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:03:51.725981 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:03:51.725989 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:03:51.725997 | orchestrator | 2026-01-05 01:03:51.726005 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-05 01:03:51.726066 | orchestrator | Monday 05 January 2026 01:01:57 +0000 
(0:00:00.290) 0:00:26.443 ******** 2026-01-05 01:03:51.726077 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-05 01:03:51.726085 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-05 01:03:51.726101 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-05 01:03:51.726109 | orchestrator | 2026-01-05 01:03:51.726117 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-05 01:03:51.726125 | orchestrator | Monday 05 January 2026 01:01:58 +0000 (0:00:00.442) 0:00:26.885 ******** 2026-01-05 01:03:51.726133 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-05 01:03:51.726153 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-05 01:03:51.726170 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-05 01:03:51.726178 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-05 01:03:51.726186 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-05 01:03:51.726194 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-05 01:03:51.726202 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-05 01:03:51.726210 | orchestrator | 2026-01-05 01:03:51.726218 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-05 01:03:51.726226 | orchestrator | Monday 05 January 2026 01:01:58 +0000 (0:00:00.873) 0:00:27.759 ******** 2026-01-05 01:03:51.726234 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-05 01:03:51.726242 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-05 01:03:51.726250 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-05 01:03:51.726258 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-05 01:03:51.726266 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-05 01:03:51.726274 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-05 01:03:51.726288 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-05 01:03:51.726296 | orchestrator | 2026-01-05 01:03:51.726304 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-01-05 01:03:51.726312 | orchestrator | Monday 05 January 2026 01:02:00 +0000 (0:00:01.832) 0:00:29.591 ******** 2026-01-05 01:03:51.726320 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:03:51.726328 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:03:51.726335 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-01-05 01:03:51.726343 | orchestrator | 2026-01-05 01:03:51.726351 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-01-05 01:03:51.726359 | orchestrator | Monday 05 January 2026 01:02:01 +0000 (0:00:00.372) 0:00:29.964 ******** 2026-01-05 01:03:51.726369 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-05 01:03:51.726386 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 
1}) 2026-01-05 01:03:51.726394 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-05 01:03:51.726403 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-05 01:03:51.726411 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-05 01:03:51.726419 | orchestrator | 2026-01-05 01:03:51.726427 | orchestrator | TASK [generate keys] *********************************************************** 2026-01-05 01:03:51.726435 | orchestrator | Monday 05 January 2026 01:02:47 +0000 (0:00:46.070) 0:01:16.034 ******** 2026-01-05 01:03:51.726443 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:03:51.726451 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:03:51.726459 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:03:51.726467 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:03:51.726475 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:03:51.726487 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 
01:03:51.726495 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-01-05 01:03:51.726503 | orchestrator | 2026-01-05 01:03:51.726511 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-01-05 01:03:51.726519 | orchestrator | Monday 05 January 2026 01:03:14 +0000 (0:00:27.494) 0:01:43.529 ******** 2026-01-05 01:03:51.726527 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:03:51.726535 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:03:51.726542 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:03:51.726550 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:03:51.726558 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:03:51.726566 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:03:51.726589 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-05 01:03:51.726597 | orchestrator | 2026-01-05 01:03:51.726605 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-01-05 01:03:51.726613 | orchestrator | Monday 05 January 2026 01:03:28 +0000 (0:00:13.992) 0:01:57.521 ******** 2026-01-05 01:03:51.726621 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:03:51.726629 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-05 01:03:51.726637 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-05 01:03:51.726644 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:03:51.726658 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-1(192.168.16.11)] => (item=None) 2026-01-05 01:03:51.726673 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-05 01:03:51.726681 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:03:51.726689 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-05 01:03:51.726697 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-05 01:03:51.726705 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:03:51.726713 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-05 01:03:51.726721 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-05 01:03:51.726728 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:03:51.726736 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-05 01:03:51.726744 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-05 01:03:51.726752 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:03:51.726760 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-05 01:03:51.726768 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-05 01:03:51.726776 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-01-05 01:03:51.726784 | orchestrator | 2026-01-05 01:03:51.726792 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:03:51.726800 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-01-05 01:03:51.726809 | 
orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-01-05 01:03:51.726817 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-01-05 01:03:51.726825 | orchestrator | 2026-01-05 01:03:51.726833 | orchestrator | 2026-01-05 01:03:51.726841 | orchestrator | 2026-01-05 01:03:51.726849 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:03:51.726857 | orchestrator | Monday 05 January 2026 01:03:48 +0000 (0:00:20.091) 0:02:17.613 ******** 2026-01-05 01:03:51.726864 | orchestrator | =============================================================================== 2026-01-05 01:03:51.726873 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.07s 2026-01-05 01:03:51.726880 | orchestrator | generate keys ---------------------------------------------------------- 27.49s 2026-01-05 01:03:51.726888 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 20.09s 2026-01-05 01:03:51.726896 | orchestrator | get keys from monitors ------------------------------------------------- 13.99s 2026-01-05 01:03:51.726904 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.48s 2026-01-05 01:03:51.726912 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.83s 2026-01-05 01:03:51.726920 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.83s 2026-01-05 01:03:51.726928 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.97s 2026-01-05 01:03:51.726940 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.87s 2026-01-05 01:03:51.726949 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.87s 2026-01-05 
01:03:51.726957 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.79s 2026-01-05 01:03:51.726964 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.77s 2026-01-05 01:03:51.726980 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.77s 2026-01-05 01:03:51.726988 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.74s 2026-01-05 01:03:51.726996 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.74s 2026-01-05 01:03:51.727004 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.69s 2026-01-05 01:03:51.727011 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.69s 2026-01-05 01:03:51.727019 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.67s 2026-01-05 01:03:51.727027 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.65s 2026-01-05 01:03:51.727035 | orchestrator | ceph-facts : Set_fact discovered_interpreter_python if not previously set --- 0.57s 2026-01-05 01:03:51.727043 | orchestrator | 2026-01-05 01:03:51 | INFO  | Task 00c00486-57c0-4099-83e8-aed474fba234 is in state STARTED 2026-01-05 01:03:51.727051 | orchestrator | 2026-01-05 01:03:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:03:54.789826 | orchestrator | 2026-01-05 01:03:54 | INFO  | Task ef4df852-edf7-46c1-b7a4-d31a202b2cd3 is in state STARTED 2026-01-05 01:03:54.791676 | orchestrator | 2026-01-05 01:03:54 | INFO  | Task dd114b84-7e37-4b00-a1d4-5f1d61828f7c is in state STARTED 2026-01-05 01:03:54.793070 | orchestrator | 2026-01-05 01:03:54 | INFO  | Task 8a21ed30-2309-4320-bf5c-dd384efaa17e is in state STARTED 2026-01-05 01:03:54.796155 | orchestrator | 2026-01-05 01:03:54 | INFO  | Task 
7fe86e08-6f30-4466-9205-5989e2e6ba5f is in state STARTED
2026-01-05 01:04:25.432249 | orchestrator | 2026-01-05 01:04:25 | INFO  | Task 00c00486-57c0-4099-83e8-aed474fba234 is in state STARTED
2026-01-05 01:04:25.432326 | orchestrator | 2026-01-05 01:04:25 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:04:28.484759 | orchestrator | 2026-01-05 01:04:28 | INFO  | Task ef4df852-edf7-46c1-b7a4-d31a202b2cd3 is in state STARTED
2026-01-05 01:04:28.488032 | orchestrator | 2026-01-05 01:04:28 | INFO  | Task dd114b84-7e37-4b00-a1d4-5f1d61828f7c is in state STARTED
2026-01-05 01:04:28.492314 | orchestrator | 2026-01-05 01:04:28 | INFO  | Task 8a21ed30-2309-4320-bf5c-dd384efaa17e is in state STARTED
2026-01-05 01:04:28.493181 | orchestrator | 2026-01-05 01:04:28 | INFO  | Task 7fe86e08-6f30-4466-9205-5989e2e6ba5f is in state STARTED
2026-01-05 01:04:28.495723 | orchestrator | 2026-01-05 01:04:28 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED
2026-01-05 01:04:28.497837 | orchestrator | 2026-01-05 01:04:28 | INFO  | Task 00c00486-57c0-4099-83e8-aed474fba234 is in state SUCCESS
2026-01-05 01:04:28.499762 | orchestrator |
2026-01-05 01:04:28.499802 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 01:04:28.499810 | orchestrator |
2026-01-05 01:04:28.499817 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 01:04:28.499826 | orchestrator | Monday 05 January 2026 01:02:34 +0000 (0:00:00.265) 0:00:00.265 ********
2026-01-05 01:04:28.499834 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:04:28.499843 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:04:28.499851 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:04:28.499881 | orchestrator |
2026-01-05 01:04:28.499887 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05
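The wait loop above (repeated "is in state STARTED" checks followed by a fixed one-second sleep until every task reaches SUCCESS) can be sketched as a minimal polling routine. This is an illustrative sketch only: the `get_state` callback and `wait_for_tasks` helper are hypothetical names, not the actual OSISM client API.

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=3600):
    """Poll task states until every task leaves STARTED, mirroring the
    'Wait 1 second(s) until the next check' loop in the log above."""
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    results = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_state(task_id)  # e.g. STARTED, SUCCESS, FAILURE
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                results[task_id] = state
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return results
```

Note how, as in the log, a task that finishes (00c00486… reaching SUCCESS) simply drops out of the next polling round while the remaining tasks keep being reported as STARTED.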
01:04:28.499892 | orchestrator | Monday 05 January 2026 01:02:34 +0000 (0:00:00.309) 0:00:00.575 ******** 2026-01-05 01:04:28.499897 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-01-05 01:04:28.499902 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-01-05 01:04:28.499907 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-01-05 01:04:28.499912 | orchestrator | 2026-01-05 01:04:28.499916 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-01-05 01:04:28.499921 | orchestrator | 2026-01-05 01:04:28.499925 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-05 01:04:28.499972 | orchestrator | Monday 05 January 2026 01:02:34 +0000 (0:00:00.466) 0:00:01.041 ******** 2026-01-05 01:04:28.499978 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:04:28.499985 | orchestrator | 2026-01-05 01:04:28.499989 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-01-05 01:04:28.499994 | orchestrator | Monday 05 January 2026 01:02:35 +0000 (0:00:00.511) 0:00:01.553 ******** 2026-01-05 01:04:28.500015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', 
'', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 01:04:28.500034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 01:04:28.500050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 
'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 01:04:28.500055 | orchestrator | 2026-01-05 01:04:28.500060 | orchestrator | TASK [horizon : Set empty custom policy] 
*************************************** 2026-01-05 01:04:28.500065 | orchestrator | Monday 05 January 2026 01:02:36 +0000 (0:00:01.186) 0:00:02.739 ******** 2026-01-05 01:04:28.500070 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:04:28.500074 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:04:28.500084 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:04:28.500088 | orchestrator | 2026-01-05 01:04:28.500093 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-05 01:04:28.500098 | orchestrator | Monday 05 January 2026 01:02:36 +0000 (0:00:00.481) 0:00:03.220 ******** 2026-01-05 01:04:28.500102 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-05 01:04:28.500111 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-05 01:04:28.500116 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-01-05 01:04:28.500120 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-01-05 01:04:28.500125 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-01-05 01:04:28.500130 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-01-05 01:04:28.500134 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-01-05 01:04:28.500139 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-01-05 01:04:28.500143 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-05 01:04:28.500148 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-05 01:04:28.500153 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-01-05 01:04:28.500157 | orchestrator | 
skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-01-05 01:04:28.500162 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-01-05 01:04:28.500166 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-01-05 01:04:28.500171 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-01-05 01:04:28.500176 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-01-05 01:04:28.500180 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-05 01:04:28.500185 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-05 01:04:28.500190 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-01-05 01:04:28.500194 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-01-05 01:04:28.500199 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-01-05 01:04:28.500203 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-01-05 01:04:28.500208 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-01-05 01:04:28.500212 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-01-05 01:04:28.500218 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-01-05 01:04:28.500226 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-01-05 01:04:28.500230 | orchestrator | included: 
/ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-01-05 01:04:28.500235 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-01-05 01:04:28.500336 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-01-05 01:04:28.500348 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-01-05 01:04:28.500353 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-01-05 01:04:28.500357 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-01-05 01:04:28.500362 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-01-05 01:04:28.500367 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-01-05 01:04:28.500372 | orchestrator | 2026-01-05 01:04:28.500376 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-05 01:04:28.500381 | orchestrator | Monday 05 January 2026 01:02:37 +0000 (0:00:00.772) 0:00:03.993 ******** 2026-01-05 01:04:28.500386 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:04:28.500390 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:04:28.500395 | orchestrator | ok: [testbed-node-2] 
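In the include_tasks loop above, items are skipped when `enabled` is `False` or `'no'` and included when it is `True` or `'yes'` — note the mix of booleans and yes/no strings in the loop items. A minimal sketch of normalizing such flags and partitioning the service list the same way (hypothetical helper names, not Kolla's actual implementation):

```python
def is_enabled(value):
    """Normalize the mixed flag styles seen in the loop items above
    (True/False booleans alongside 'yes'/'no' strings)."""
    if isinstance(value, bool):
        return value
    return str(value).strip().lower() in ("yes", "true", "1")

def split_services(items):
    """Partition service items the way the include_tasks loop does:
    disabled items are skipped, enabled items get policy_item.yml included."""
    included = [i["name"] for i in items if is_enabled(i["enabled"])]
    skipped = [i["name"] for i in items if not is_enabled(i["enabled"])]
    return included, skipped
```

For example, with items drawn from the log, `split_services([{'name': 'cloudkitty', 'enabled': False}, {'name': 'heat', 'enabled': 'no'}, {'name': 'cinder', 'enabled': 'yes'}, {'name': 'glance', 'enabled': True}])` selects cinder and glance and skips cloudkitty and heat.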
2026-01-05 01:04:28.500400 | orchestrator | 2026-01-05 01:04:28.500404 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-05 01:04:28.500409 | orchestrator | Monday 05 January 2026 01:02:38 +0000 (0:00:00.315) 0:00:04.309 ******** 2026-01-05 01:04:28.500414 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:04:28.500419 | orchestrator | 2026-01-05 01:04:28.500427 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-05 01:04:28.500431 | orchestrator | Monday 05 January 2026 01:02:38 +0000 (0:00:00.120) 0:00:04.430 ******** 2026-01-05 01:04:28.500436 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:04:28.500441 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:04:28.500446 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:04:28.500450 | orchestrator | 2026-01-05 01:04:28.500455 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-05 01:04:28.500459 | orchestrator | Monday 05 January 2026 01:02:38 +0000 (0:00:00.497) 0:00:04.927 ******** 2026-01-05 01:04:28.500464 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:04:28.500468 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:04:28.500473 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:04:28.500477 | orchestrator | 2026-01-05 01:04:28.500482 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-05 01:04:28.500487 | orchestrator | Monday 05 January 2026 01:02:38 +0000 (0:00:00.315) 0:00:05.242 ******** 2026-01-05 01:04:28.500491 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:04:28.500496 | orchestrator | 2026-01-05 01:04:28.500500 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-05 01:04:28.500505 | orchestrator | Monday 05 January 2026 01:02:39 +0000 (0:00:00.148) 0:00:05.391 ******** 2026-01-05 
[testbed-node-1] 2026-01-05 01:04:28.501124 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:04:28.501128 | orchestrator | 2026-01-05 01:04:28.501133 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-05 01:04:28.501142 | orchestrator | Monday 05 January 2026 01:02:46 +0000 (0:00:00.322) 0:00:12.619 ******** 2026-01-05 01:04:28.501147 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:04:28.501151 | orchestrator | 2026-01-05 01:04:28.501156 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-05 01:04:28.501161 | orchestrator | Monday 05 January 2026 01:02:46 +0000 (0:00:00.154) 0:00:12.773 ******** 2026-01-05 01:04:28.501165 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:04:28.501170 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:04:28.501177 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:04:28.501182 | orchestrator | 2026-01-05 01:04:28.501187 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-01-05 01:04:28.501191 | orchestrator | Monday 05 January 2026 01:02:47 +0000 (0:00:00.496) 0:00:13.270 ******** 2026-01-05 01:04:28.501196 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:04:28.501200 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:04:28.501205 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:04:28.501209 | orchestrator | 2026-01-05 01:04:28.501214 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-01-05 01:04:28.501219 | orchestrator | Monday 05 January 2026 01:02:48 +0000 (0:00:01.754) 0:00:15.025 ******** 2026-01-05 01:04:28.501223 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-05 01:04:28.501228 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-05 
01:04:28.501233 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-05 01:04:28.501237 | orchestrator | 2026-01-05 01:04:28.501242 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-01-05 01:04:28.501247 | orchestrator | Monday 05 January 2026 01:02:50 +0000 (0:00:02.220) 0:00:17.246 ******** 2026-01-05 01:04:28.501251 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-05 01:04:28.501256 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-05 01:04:28.501261 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-05 01:04:28.501266 | orchestrator | 2026-01-05 01:04:28.501270 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-01-05 01:04:28.501278 | orchestrator | Monday 05 January 2026 01:02:53 +0000 (0:00:02.685) 0:00:19.932 ******** 2026-01-05 01:04:28.501283 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-01-05 01:04:28.501288 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-01-05 01:04:28.501292 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-01-05 01:04:28.501297 | orchestrator | 2026-01-05 01:04:28.501301 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-01-05 01:04:28.501306 | orchestrator | Monday 05 January 2026 01:02:55 +0000 (0:00:02.181) 0:00:22.114 ******** 2026-01-05 01:04:28.501310 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:04:28.501315 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:04:28.501320 | 
orchestrator | skipping: [testbed-node-2] 2026-01-05 01:04:28.501324 | orchestrator | 2026-01-05 01:04:28.501329 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-01-05 01:04:28.501333 | orchestrator | Monday 05 January 2026 01:02:56 +0000 (0:00:00.334) 0:00:22.448 ******** 2026-01-05 01:04:28.501338 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:04:28.501342 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:04:28.501347 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:04:28.501351 | orchestrator | 2026-01-05 01:04:28.501356 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-05 01:04:28.501402 | orchestrator | Monday 05 January 2026 01:02:56 +0000 (0:00:00.310) 0:00:22.759 ******** 2026-01-05 01:04:28.501407 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:04:28.501412 | orchestrator | 2026-01-05 01:04:28.501417 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-01-05 01:04:28.501421 | orchestrator | Monday 05 January 2026 01:02:57 +0000 (0:00:00.833) 0:00:23.592 ******** 2026-01-05 01:04:28.501431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 01:04:28.501442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 01:04:28.501458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 
'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 01:04:28.501464 | orchestrator | 2026-01-05 01:04:28.501468 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal 
TLS certificate] *** 2026-01-05 01:04:28.501473 | orchestrator | Monday 05 January 2026 01:02:59 +0000 (0:00:01.779) 0:00:25.372 ******** 2026-01-05 01:04:28.501482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 01:04:28.501491 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:04:28.501503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 01:04:28.501545 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:04:28.501551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 
'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 01:04:28.501561 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:04:28.501566 | orchestrator | 2026-01-05 01:04:28.501571 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-01-05 01:04:28.501576 | orchestrator | Monday 05 January 2026 01:02:59 +0000 (0:00:00.657) 0:00:26.029 ******** 2026-01-05 01:04:28.501589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 01:04:28.501599 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:04:28.501604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 
'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 01:04:28.501612 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:04:28.501624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': 
[]}}}})  2026-01-05 01:04:28.501637 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:04:28.501646 | orchestrator | 2026-01-05 01:04:28.501656 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-01-05 01:04:28.501663 | orchestrator | Monday 05 January 2026 01:03:00 +0000 (0:00:00.826) 0:00:26.856 ******** 2026-01-05 01:04:28.501675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 01:04:28.501690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 01:04:28.501709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 01:04:28.501717 | orchestrator | 2026-01-05 01:04:28.501724 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-05 01:04:28.501732 | orchestrator | Monday 05 January 2026 01:03:02 +0000 (0:00:01.729) 0:00:28.586 ******** 2026-01-05 01:04:28.501739 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:04:28.501747 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:04:28.501754 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:04:28.501761 | orchestrator | 2026-01-05 01:04:28.501769 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-05 01:04:28.501776 | orchestrator | Monday 05 January 2026 01:03:02 +0000 (0:00:00.353) 0:00:28.940 ******** 2026-01-05 01:04:28.501783 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:04:28.501796 | orchestrator | 2026-01-05 01:04:28.501803 | orchestrator | TASK [horizon : Creating Horizon 
database] ************************************* 2026-01-05 01:04:28.501811 | orchestrator | Monday 05 January 2026 01:03:03 +0000 (0:00:00.638) 0:00:29.579 ******** 2026-01-05 01:04:28.501816 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:04:28.501820 | orchestrator | 2026-01-05 01:04:28.501825 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-01-05 01:04:28.501829 | orchestrator | Monday 05 January 2026 01:03:06 +0000 (0:00:03.008) 0:00:32.588 ******** 2026-01-05 01:04:28.501834 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:04:28.501839 | orchestrator | 2026-01-05 01:04:28.501843 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-01-05 01:04:28.501848 | orchestrator | Monday 05 January 2026 01:03:09 +0000 (0:00:03.257) 0:00:35.845 ******** 2026-01-05 01:04:28.501852 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:04:28.501857 | orchestrator | 2026-01-05 01:04:28.501861 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-05 01:04:28.501866 | orchestrator | Monday 05 January 2026 01:03:28 +0000 (0:00:19.118) 0:00:54.964 ******** 2026-01-05 01:04:28.501871 | orchestrator | 2026-01-05 01:04:28.501875 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-05 01:04:28.501880 | orchestrator | Monday 05 January 2026 01:03:28 +0000 (0:00:00.086) 0:00:55.050 ******** 2026-01-05 01:04:28.501884 | orchestrator | 2026-01-05 01:04:28.501889 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-05 01:04:28.501893 | orchestrator | Monday 05 January 2026 01:03:28 +0000 (0:00:00.071) 0:00:55.122 ******** 2026-01-05 01:04:28.501898 | orchestrator | 2026-01-05 01:04:28.501903 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-01-05 
01:04:28.501907 | orchestrator | Monday 05 January 2026 01:03:28 +0000 (0:00:00.063) 0:00:55.186 ******** 2026-01-05 01:04:28.501912 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:04:28.501916 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:04:28.501921 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:04:28.501925 | orchestrator | 2026-01-05 01:04:28.501930 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:04:28.501935 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-05 01:04:28.501940 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-01-05 01:04:28.501945 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-01-05 01:04:28.501950 | orchestrator | 2026-01-05 01:04:28.501954 | orchestrator | 2026-01-05 01:04:28.501959 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:04:28.501963 | orchestrator | Monday 05 January 2026 01:04:26 +0000 (0:00:57.636) 0:01:52.822 ******** 2026-01-05 01:04:28.501968 | orchestrator | =============================================================================== 2026-01-05 01:04:28.501972 | orchestrator | horizon : Restart horizon container ------------------------------------ 57.64s 2026-01-05 01:04:28.501977 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 19.12s 2026-01-05 01:04:28.501981 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 3.26s 2026-01-05 01:04:28.501986 | orchestrator | horizon : Creating Horizon database ------------------------------------- 3.01s 2026-01-05 01:04:28.501991 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.69s 2026-01-05 01:04:28.501995 | 
orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.22s 2026-01-05 01:04:28.502000 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.18s 2026-01-05 01:04:28.502009 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.78s 2026-01-05 01:04:28.502013 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.75s 2026-01-05 01:04:28.502060 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.73s 2026-01-05 01:04:28.502064 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.19s 2026-01-05 01:04:28.502069 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.83s 2026-01-05 01:04:28.502073 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.83s 2026-01-05 01:04:28.502078 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.77s 2026-01-05 01:04:28.502083 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.66s 2026-01-05 01:04:28.502087 | orchestrator | horizon : Update policy file name --------------------------------------- 0.64s 2026-01-05 01:04:28.502092 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.64s 2026-01-05 01:04:28.502096 | orchestrator | horizon : Update policy file name --------------------------------------- 0.60s 2026-01-05 01:04:28.502101 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.55s 2026-01-05 01:04:28.502105 | orchestrator | horizon : Update policy file name --------------------------------------- 0.54s 2026-01-05 01:04:28.502110 | orchestrator | 2026-01-05 01:04:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:31.554394 | orchestrator | 
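The horizon `haproxy` service dict logged above ('horizon_external', 'frontend_http_extra', 'backend_http_extra', 'acme_client') drives template-generated haproxy sections. As an illustrative sketch only, roughly what such an entry renders to; the bind address, backend names, and server line here are assumptions, not the actual kolla-generated config:

```
# Illustrative haproxy fragment for the 'horizon_external' entry above.
# Addresses and section names are placeholders.
frontend horizon_external_front
    mode http
    bind 192.0.2.10:80
    # ACME HTTP-01 challenges are diverted before normal routing,
    # matching the frontend_http_extra rule in the log.
    use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }
    default_backend horizon_external_back

backend horizon_external_back
    mode http
    # matches backend_http_extra: ['balance roundrobin']
    balance roundrobin
    server testbed-node-0 192.168.16.10:80 check
```

The `path_reg` ACL is what lets the ACME client answer `/.well-known/acme-challenge/` probes on port 80 while everything else is redirected or proxied to Horizon.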
2026-01-05 01:04:31 | INFO  | Task ef4df852-edf7-46c1-b7a4-d31a202b2cd3 is in state STARTED 2026-01-05 01:04:31.556384 | orchestrator | 2026-01-05 01:04:31 | INFO  | Task dd114b84-7e37-4b00-a1d4-5f1d61828f7c is in state STARTED 2026-01-05 01:04:31.559700 | orchestrator | 2026-01-05 01:04:31 | INFO  | Task 8a21ed30-2309-4320-bf5c-dd384efaa17e is in state SUCCESS 2026-01-05 01:04:31.561170 | orchestrator | 2026-01-05 01:04:31 | INFO  | Task 7fe86e08-6f30-4466-9205-5989e2e6ba5f is in state STARTED 2026-01-05 01:04:31.562753 | orchestrator | 2026-01-05 01:04:31 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:04:31.562831 | orchestrator | 2026-01-05 01:04:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:34.628463 | orchestrator | 2026-01-05 01:04:34 | INFO  | Task ef4df852-edf7-46c1-b7a4-d31a202b2cd3 is in state STARTED 2026-01-05 01:04:34.631710 | orchestrator | 2026-01-05 01:04:34 | INFO  | Task dd114b84-7e37-4b00-a1d4-5f1d61828f7c is in state STARTED 2026-01-05 01:04:34.635585 | orchestrator | 2026-01-05 01:04:34 | INFO  | Task bce4d2da-69f3-48f7-b791-ff2f86de16b3 is in state STARTED 2026-01-05 01:04:34.638103 | orchestrator | 2026-01-05 01:04:34 | INFO  | Task 7fe86e08-6f30-4466-9205-5989e2e6ba5f is in state STARTED 2026-01-05 01:04:34.640810 | orchestrator | 2026-01-05 01:04:34 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:04:34.641600 | orchestrator | 2026-01-05 01:04:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:37.690594 | orchestrator | 2026-01-05 01:04:37 | INFO  | Task ef4df852-edf7-46c1-b7a4-d31a202b2cd3 is in state STARTED 2026-01-05 01:04:37.691173 | orchestrator | 2026-01-05 01:04:37 | INFO  | Task dd114b84-7e37-4b00-a1d4-5f1d61828f7c is in state STARTED 2026-01-05 01:04:37.692341 | orchestrator | 2026-01-05 01:04:37 | INFO  | Task bce4d2da-69f3-48f7-b791-ff2f86de16b3 is in state STARTED 2026-01-05 01:04:37.693371 | orchestrator | 
2026-01-05 01:04:37 | INFO  | Task 7fe86e08-6f30-4466-9205-5989e2e6ba5f is in state STARTED 2026-01-05 01:04:37.694199 | orchestrator | 2026-01-05 01:04:37 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:04:37.694260 | orchestrator | 2026-01-05 01:04:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:40.753563 | orchestrator | 2026-01-05 01:04:40 | INFO  | Task ef4df852-edf7-46c1-b7a4-d31a202b2cd3 is in state STARTED 2026-01-05 01:04:40.755692 | orchestrator | 2026-01-05 01:04:40 | INFO  | Task dd114b84-7e37-4b00-a1d4-5f1d61828f7c is in state STARTED 2026-01-05 01:04:40.757471 | orchestrator | 2026-01-05 01:04:40 | INFO  | Task bce4d2da-69f3-48f7-b791-ff2f86de16b3 is in state STARTED 2026-01-05 01:04:40.759324 | orchestrator | 2026-01-05 01:04:40 | INFO  | Task 7fe86e08-6f30-4466-9205-5989e2e6ba5f is in state STARTED 2026-01-05 01:04:40.760892 | orchestrator | 2026-01-05 01:04:40 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:04:40.760951 | orchestrator | 2026-01-05 01:04:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:43.826681 | orchestrator | 2026-01-05 01:04:43 | INFO  | Task ef4df852-edf7-46c1-b7a4-d31a202b2cd3 is in state STARTED 2026-01-05 01:04:43.830734 | orchestrator | 2026-01-05 01:04:43 | INFO  | Task dd114b84-7e37-4b00-a1d4-5f1d61828f7c is in state STARTED 2026-01-05 01:04:43.835240 | orchestrator | 2026-01-05 01:04:43 | INFO  | Task bce4d2da-69f3-48f7-b791-ff2f86de16b3 is in state STARTED 2026-01-05 01:04:43.838057 | orchestrator | 2026-01-05 01:04:43 | INFO  | Task 7fe86e08-6f30-4466-9205-5989e2e6ba5f is in state STARTED 2026-01-05 01:04:43.839780 | orchestrator | 2026-01-05 01:04:43 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:04:43.840281 | orchestrator | 2026-01-05 01:04:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:46.920466 | orchestrator | 2026-01-05 01:04:46 | INFO  | 
Task ef4df852-edf7-46c1-b7a4-d31a202b2cd3 is in state STARTED 2026-01-05 01:04:46.923291 | orchestrator | 2026-01-05 01:04:46 | INFO  | Task dd114b84-7e37-4b00-a1d4-5f1d61828f7c is in state STARTED 2026-01-05 01:04:46.926306 | orchestrator | 2026-01-05 01:04:46 | INFO  | Task bce4d2da-69f3-48f7-b791-ff2f86de16b3 is in state STARTED 2026-01-05 01:04:46.928710 | orchestrator | 2026-01-05 01:04:46 | INFO  | Task 7fe86e08-6f30-4466-9205-5989e2e6ba5f is in state STARTED 2026-01-05 01:04:46.931895 | orchestrator | 2026-01-05 01:04:46 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:04:46.931955 | orchestrator | 2026-01-05 01:04:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:49.976997 | orchestrator | 2026-01-05 01:04:49 | INFO  | Task ef4df852-edf7-46c1-b7a4-d31a202b2cd3 is in state STARTED 2026-01-05 01:04:49.979613 | orchestrator | 2026-01-05 01:04:49 | INFO  | Task dd114b84-7e37-4b00-a1d4-5f1d61828f7c is in state STARTED 2026-01-05 01:04:49.982371 | orchestrator | 2026-01-05 01:04:49 | INFO  | Task bce4d2da-69f3-48f7-b791-ff2f86de16b3 is in state STARTED 2026-01-05 01:04:49.985281 | orchestrator | 2026-01-05 01:04:49 | INFO  | Task 7fe86e08-6f30-4466-9205-5989e2e6ba5f is in state STARTED 2026-01-05 01:04:49.987119 | orchestrator | 2026-01-05 01:04:49 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:04:49.987178 | orchestrator | 2026-01-05 01:04:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:53.032550 | orchestrator | 2026-01-05 01:04:53 | INFO  | Task ef4df852-edf7-46c1-b7a4-d31a202b2cd3 is in state STARTED 2026-01-05 01:04:53.033051 | orchestrator | 2026-01-05 01:04:53 | INFO  | Task dd114b84-7e37-4b00-a1d4-5f1d61828f7c is in state STARTED 2026-01-05 01:04:53.034400 | orchestrator | 2026-01-05 01:04:53 | INFO  | Task bce4d2da-69f3-48f7-b791-ff2f86de16b3 is in state STARTED 2026-01-05 01:04:53.035709 | orchestrator | 2026-01-05 01:04:53 | INFO  | Task 
7fe86e08-6f30-4466-9205-5989e2e6ba5f is in state STARTED 2026-01-05 01:04:53.038479 | orchestrator | 2026-01-05 01:04:53 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:04:53.038527 | orchestrator | 2026-01-05 01:04:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:56.113533 | orchestrator | 2026-01-05 01:04:56 | INFO  | Task ef4df852-edf7-46c1-b7a4-d31a202b2cd3 is in state SUCCESS 2026-01-05 01:04:56.114271 | orchestrator | 2026-01-05 01:04:56.114308 | orchestrator | 2026-01-05 01:04:56.114317 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-01-05 01:04:56.114360 | orchestrator | 2026-01-05 01:04:56.114369 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-01-05 01:04:56.114376 | orchestrator | Monday 05 January 2026 01:03:53 +0000 (0:00:00.164) 0:00:00.164 ******** 2026-01-05 01:04:56.114383 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-01-05 01:04:56.114391 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-05 01:04:56.114398 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-05 01:04:56.114404 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-01-05 01:04:56.114410 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-05 01:04:56.114417 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-01-05 01:04:56.114423 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-01-05 01:04:56.114429 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.gnocchi.keyring) 2026-01-05 01:04:56.114435 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-01-05 01:04:56.114441 | orchestrator | 2026-01-05 01:04:56.114448 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-01-05 01:04:56.114454 | orchestrator | Monday 05 January 2026 01:03:58 +0000 (0:00:05.073) 0:00:05.238 ******** 2026-01-05 01:04:56.114481 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-01-05 01:04:56.114504 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-05 01:04:56.114510 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-05 01:04:56.114517 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-01-05 01:04:56.114523 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-05 01:04:56.114533 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-01-05 01:04:56.114544 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-01-05 01:04:56.114578 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-01-05 01:04:56.114588 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-01-05 01:04:56.114594 | orchestrator | 2026-01-05 01:04:56.114601 | orchestrator | TASK [Create share directory] ************************************************** 2026-01-05 01:04:56.114612 | orchestrator | Monday 05 January 2026 01:04:03 +0000 (0:00:04.708) 0:00:09.946 ******** 2026-01-05 01:04:56.114624 
| orchestrator | changed: [testbed-manager -> localhost] 2026-01-05 01:04:56.114635 | orchestrator | 2026-01-05 01:04:56.114645 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-01-05 01:04:56.114655 | orchestrator | Monday 05 January 2026 01:04:04 +0000 (0:00:01.059) 0:00:11.005 ******** 2026-01-05 01:04:56.114687 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-01-05 01:04:56.114696 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-05 01:04:56.114702 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-05 01:04:56.114708 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-01-05 01:04:56.114714 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-05 01:04:56.114721 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-01-05 01:04:56.114727 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-01-05 01:04:56.114733 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-01-05 01:04:56.114739 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-01-05 01:04:56.114745 | orchestrator | 2026-01-05 01:04:56.114751 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-01-05 01:04:56.114757 | orchestrator | Monday 05 January 2026 01:04:19 +0000 (0:00:14.645) 0:00:25.651 ******** 2026-01-05 01:04:56.114763 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-01-05 01:04:56.114770 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 
2026-01-05 01:04:56.114776 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-01-05 01:04:56.114782 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-01-05 01:04:56.114800 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-01-05 01:04:56.114806 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-01-05 01:04:56.114812 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-01-05 01:04:56.114819 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-01-05 01:04:56.114825 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-01-05 01:04:56.114831 | orchestrator | 2026-01-05 01:04:56.114837 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-01-05 01:04:56.114843 | orchestrator | Monday 05 January 2026 01:04:22 +0000 (0:00:03.404) 0:00:29.055 ******** 2026-01-05 01:04:56.114851 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-01-05 01:04:56.114859 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-05 01:04:56.114866 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-05 01:04:56.114873 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-01-05 01:04:56.114881 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-05 01:04:56.114888 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-01-05 01:04:56.114896 | orchestrator | changed: [testbed-manager] => 
(item=ceph.client.glance.keyring) 2026-01-05 01:04:56.114903 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-01-05 01:04:56.114910 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-01-05 01:04:56.114918 | orchestrator | 2026-01-05 01:04:56.114925 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:04:56.114937 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:04:56.114946 | orchestrator | 2026-01-05 01:04:56.114953 | orchestrator | 2026-01-05 01:04:56.114967 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:04:56.114974 | orchestrator | Monday 05 January 2026 01:04:30 +0000 (0:00:07.738) 0:00:36.794 ******** 2026-01-05 01:04:56.114981 | orchestrator | =============================================================================== 2026-01-05 01:04:56.114989 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.65s 2026-01-05 01:04:56.114996 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.74s 2026-01-05 01:04:56.115003 | orchestrator | Check if ceph keys exist ------------------------------------------------ 5.07s 2026-01-05 01:04:56.115011 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.71s 2026-01-05 01:04:56.115018 | orchestrator | Check if target directories exist --------------------------------------- 3.40s 2026-01-05 01:04:56.115025 | orchestrator | Create share directory -------------------------------------------------- 1.06s 2026-01-05 01:04:56.115032 | orchestrator | 2026-01-05 01:04:56.115040 | orchestrator | 2026-01-05 01:04:56.115047 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:04:56.115055 | 
orchestrator | 2026-01-05 01:04:56.115062 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 01:04:56.115069 | orchestrator | Monday 05 January 2026 01:03:44 +0000 (0:00:00.305) 0:00:00.305 ******** 2026-01-05 01:04:56.115077 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:04:56.115085 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:04:56.115092 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:04:56.115099 | orchestrator | 2026-01-05 01:04:56.115106 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:04:56.115114 | orchestrator | Monday 05 January 2026 01:03:44 +0000 (0:00:00.334) 0:00:00.639 ******** 2026-01-05 01:04:56.115121 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-01-05 01:04:56.115129 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-01-05 01:04:56.115136 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-01-05 01:04:56.115143 | orchestrator | 2026-01-05 01:04:56.115151 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-01-05 01:04:56.115158 | orchestrator | 2026-01-05 01:04:56.115166 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-05 01:04:56.115173 | orchestrator | Monday 05 January 2026 01:03:45 +0000 (0:00:00.526) 0:00:01.166 ******** 2026-01-05 01:04:56.115180 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:04:56.115188 | orchestrator | 2026-01-05 01:04:56.115195 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-01-05 01:04:56.115203 | orchestrator | Monday 05 January 2026 01:03:45 +0000 (0:00:00.543) 0:00:01.710 ******** 2026-01-05 01:04:56.115211 | orchestrator | FAILED - RETRYING: [testbed-node-0]: 
barbican | Creating services (5 retries left). 2026-01-05 01:04:56.115218 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating services (4 retries left). 2026-01-05 01:04:56.115224 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating services (3 retries left). 2026-01-05 01:04:56.115231 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating services (2 retries left). 2026-01-05 01:04:56.115237 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating services (1 retries left). 2026-01-05 01:04:56.115272 | orchestrator | failed: [testbed-node-0] (item=barbican (key-manager)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Barbican Key Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9311"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9311"}], "name": "barbican", "type": "key-manager"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. 
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 133, in _do_create_plugin\n disc = self.get_discovery(session,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 605, in get_discovery\n return discover.get_discovery(session=session, url=url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1459, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 539, in __init__\n self._data = get_version_data(session, url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 106, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1154, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 985, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767575091.4352682-3272-207828096316429/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767575091.4352682-3272-207828096316429/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File 
\"/tmp/ansible-tmp-1767575091.4352682-3272-207828096316429/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_os_keystone_service_payload_dktkfhh4/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_os_keystone_service_payload_dktkfhh4/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_os_keystone_service_payload_dktkfhh4/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_keystone_service_payload_dktkfhh4/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_os_keystone_service_payload_dktkfhh4/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 88, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 286, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 268, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 131, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 203, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 155, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2026-01-05 01:04:56.115294 | orchestrator | 2026-01-05 01:04:56.115300 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:04:56.115306 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2026-01-05 01:04:56.115313 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:04:56.115343 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:04:56.115354 | orchestrator | 2026-01-05 01:04:56.115366 | orchestrator | 2026-01-05 01:04:56.115378 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:04:56.115389 | orchestrator | Monday 05 January 2026 01:04:52 +0000 (0:01:07.120) 0:01:08.831 ******** 2026-01-05 01:04:56.115400 | orchestrator | =============================================================================== 2026-01-05 01:04:56.115407 | orchestrator | service-ks-register : barbican | Creating services --------------------- 67.12s 2026-01-05 01:04:56.115413 | orchestrator | barbican : include_tasks ------------------------------------------------ 0.54s 2026-01-05 01:04:56.115419 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s 2026-01-05 01:04:56.115426 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2026-01-05 01:04:56.115938 | orchestrator | 2026-01-05 01:04:56 | INFO  | Task dd114b84-7e37-4b00-a1d4-5f1d61828f7c is in state STARTED 2026-01-05 01:04:56.115960 | orchestrator | 2026-01-05 01:04:56 | INFO  | Task bce4d2da-69f3-48f7-b791-ff2f86de16b3 is in state STARTED 2026-01-05 01:04:56.116220 | orchestrator | 2026-01-05 01:04:56.116234 | orchestrator | 
2026-01-05 01:04:56.116240 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:04:56.116247 | orchestrator | 2026-01-05 01:04:56.116253 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 01:04:56.116259 | orchestrator | Monday 05 January 2026 01:03:44 +0000 (0:00:00.280) 0:00:00.280 ******** 2026-01-05 01:04:56.116277 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:04:56.116291 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:04:56.116298 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:04:56.116314 | orchestrator | 2026-01-05 01:04:56.116320 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:04:56.116327 | orchestrator | Monday 05 January 2026 01:03:44 +0000 (0:00:00.317) 0:00:00.598 ******** 2026-01-05 01:04:56.116333 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-01-05 01:04:56.116340 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-01-05 01:04:56.116346 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-01-05 01:04:56.116353 | orchestrator | 2026-01-05 01:04:56.116359 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-01-05 01:04:56.116365 | orchestrator | 2026-01-05 01:04:56.116372 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-05 01:04:56.116378 | orchestrator | Monday 05 January 2026 01:03:44 +0000 (0:00:00.536) 0:00:01.134 ******** 2026-01-05 01:04:56.116384 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:04:56.116391 | orchestrator | 2026-01-05 01:04:56.116397 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-01-05 01:04:56.116403 | orchestrator | 
Monday 05 January 2026 01:03:45 +0000 (0:00:00.646) 0:00:01.781 ******** 2026-01-05 01:04:56.116409 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating services (5 retries left). 2026-01-05 01:04:56.116415 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating services (4 retries left). 2026-01-05 01:04:56.116421 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating services (3 retries left). 2026-01-05 01:04:56.116427 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating services (2 retries left). 2026-01-05 01:04:56.116434 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating services (1 retries left). 2026-01-05 01:04:56.116491 | orchestrator | failed: [testbed-node-0] (item=designate (dns)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Designate DNS Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9001"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9001"}], "name": "designate", "type": "dns"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. 
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 133, in _do_create_plugin\n disc = self.get_discovery(session,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 605, in get_discovery\n return discover.get_discovery(session=session, url=url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1459, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 539, in __init__\n self._data = get_version_data(session, url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 106, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1154, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 985, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767575091.2546988-3261-257394178475123/AnsiballZ_catalog_service.py\", line 107, in <module>\n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767575091.2546988-3261-257394178475123/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File 
\"/tmp/ansible-tmp-1767575091.2546988-3261-257394178475123/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"<frozen runpy>\", line 226, in run_module\n File \"<frozen runpy>\", line 98, in _run_module_code\n File \"<frozen runpy>\", line 88, in _run_code\n File \"/tmp/ansible_os_keystone_service_payload_1hm5uv2q/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in <module>\n File \"/tmp/ansible_os_keystone_service_payload_1hm5uv2q/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_os_keystone_service_payload_1hm5uv2q/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_keystone_service_payload_1hm5uv2q/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_os_keystone_service_payload_1hm5uv2q/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 88, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 286, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 268, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 131, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 203, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 155, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2026-01-05 01:04:56.116512 | orchestrator | 2026-01-05 01:04:56.116518 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:04:56.116529 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2026-01-05 01:04:56.116536 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:04:56.116542 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:04:56.116549 | orchestrator | 2026-01-05 01:04:56.116555 | orchestrator | 2026-01-05 01:04:56.116561 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:04:56.116567 | orchestrator | Monday 05 January 2026 01:04:52 +0000 (0:01:07.158) 0:01:08.940 ******** 2026-01-05 01:04:56.116573 | orchestrator | =============================================================================== 2026-01-05 01:04:56.116580 | orchestrator | service-ks-register : designate | Creating services -------------------- 67.16s 2026-01-05 01:04:56.116586 | orchestrator | designate : include_tasks ----------------------------------------------- 0.65s 2026-01-05 01:04:56.116592 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.54s 2026-01-05 01:04:56.116598 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2026-01-05 01:04:56.116604 | orchestrator | 2026-01-05 01:04:56 | INFO  | Task 7fe86e08-6f30-4466-9205-5989e2e6ba5f is in state SUCCESS 2026-01-05 01:04:56.117209 | orchestrator | 2026-01-05 01:04:56 | INFO  | Task 2d4d2aba-07b5-465c-8f60-d51dff8990e1 is in state STARTED 2026-01-05 01:04:56.118813 | orchestrator | 2026-01-05 01:04:56 | INFO  | Task 
2d1a48bc-e9f0-4a10-8aed-5e0e52b44ca6 is in state STARTED 2026-01-05 01:04:56.119679 | orchestrator | 2026-01-05 01:04:56 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:04:56.119723 | orchestrator | 2026-01-05 01:04:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:04:59.171618 | orchestrator | 2026-01-05 01:04:59.171711 | orchestrator | 2026-01-05 01:04:59.171719 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:04:59.171727 | orchestrator | 2026-01-05 01:04:59.171734 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 01:04:59.171741 | orchestrator | Monday 05 January 2026 01:03:44 +0000 (0:00:00.282) 0:00:00.282 ******** 2026-01-05 01:04:59.171746 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:04:59.171754 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:04:59.171759 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:04:59.171765 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:04:59.171771 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:04:59.171777 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:04:59.171782 | orchestrator | 2026-01-05 01:04:59.171788 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:04:59.171811 | orchestrator | Monday 05 January 2026 01:03:45 +0000 (0:00:00.787) 0:00:01.070 ******** 2026-01-05 01:04:59.171817 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-01-05 01:04:59.171825 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-01-05 01:04:59.171831 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-01-05 01:04:59.171838 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-01-05 01:04:59.171844 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-01-05 01:04:59.171850 | orchestrator | 
ok: [testbed-node-5] => (item=enable_neutron_True) 2026-01-05 01:04:59.171856 | orchestrator | 2026-01-05 01:04:59.171863 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-01-05 01:04:59.171887 | orchestrator | 2026-01-05 01:04:59.171893 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-05 01:04:59.171917 | orchestrator | Monday 05 January 2026 01:03:45 +0000 (0:00:00.670) 0:00:01.740 ******** 2026-01-05 01:04:59.171932 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:04:59.171941 | orchestrator | 2026-01-05 01:04:59.171947 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-01-05 01:04:59.171953 | orchestrator | Monday 05 January 2026 01:03:47 +0000 (0:00:01.391) 0:00:03.131 ******** 2026-01-05 01:04:59.171959 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:04:59.171965 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:04:59.171971 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:04:59.171977 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:04:59.172073 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:04:59.172081 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:04:59.172087 | orchestrator | 2026-01-05 01:04:59.172094 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-01-05 01:04:59.172100 | orchestrator | Monday 05 January 2026 01:03:48 +0000 (0:00:01.434) 0:00:04.566 ******** 2026-01-05 01:04:59.172106 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:04:59.172112 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:04:59.172118 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:04:59.172124 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:04:59.172130 | orchestrator | ok: [testbed-node-4] 2026-01-05 
01:04:59.172136 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:04:59.172143 | orchestrator | 2026-01-05 01:04:59.172149 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-01-05 01:04:59.172156 | orchestrator | Monday 05 January 2026 01:03:49 +0000 (0:00:01.225) 0:00:05.792 ******** 2026-01-05 01:04:59.172162 | orchestrator | ok: [testbed-node-0] => { 2026-01-05 01:04:59.172170 | orchestrator |  "changed": false, 2026-01-05 01:04:59.172176 | orchestrator |  "msg": "All assertions passed" 2026-01-05 01:04:59.172183 | orchestrator | } 2026-01-05 01:04:59.172189 | orchestrator | ok: [testbed-node-1] => { 2026-01-05 01:04:59.172196 | orchestrator |  "changed": false, 2026-01-05 01:04:59.172202 | orchestrator |  "msg": "All assertions passed" 2026-01-05 01:04:59.172208 | orchestrator | } 2026-01-05 01:04:59.172215 | orchestrator | ok: [testbed-node-2] => { 2026-01-05 01:04:59.172221 | orchestrator |  "changed": false, 2026-01-05 01:04:59.172228 | orchestrator |  "msg": "All assertions passed" 2026-01-05 01:04:59.172234 | orchestrator | } 2026-01-05 01:04:59.172241 | orchestrator | ok: [testbed-node-3] => { 2026-01-05 01:04:59.172248 | orchestrator |  "changed": false, 2026-01-05 01:04:59.172254 | orchestrator |  "msg": "All assertions passed" 2026-01-05 01:04:59.172261 | orchestrator | } 2026-01-05 01:04:59.172268 | orchestrator | ok: [testbed-node-4] => { 2026-01-05 01:04:59.172274 | orchestrator |  "changed": false, 2026-01-05 01:04:59.172280 | orchestrator |  "msg": "All assertions passed" 2026-01-05 01:04:59.172287 | orchestrator | } 2026-01-05 01:04:59.172293 | orchestrator | ok: [testbed-node-5] => { 2026-01-05 01:04:59.172299 | orchestrator |  "changed": false, 2026-01-05 01:04:59.172306 | orchestrator |  "msg": "All assertions passed" 2026-01-05 01:04:59.172312 | orchestrator | } 2026-01-05 01:04:59.172319 | orchestrator | 2026-01-05 01:04:59.172325 | orchestrator | TASK [neutron : Check for ML2/OVS 
presence] ************************************ 2026-01-05 01:04:59.172332 | orchestrator | Monday 05 January 2026 01:03:50 +0000 (0:00:00.916) 0:00:06.708 ******** 2026-01-05 01:04:59.172339 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:04:59.172346 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:04:59.172353 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:04:59.172360 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:04:59.172367 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:04:59.172374 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:04:59.172381 | orchestrator | 2026-01-05 01:04:59.172398 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-01-05 01:04:59.172405 | orchestrator | Monday 05 January 2026 01:03:51 +0000 (0:00:00.639) 0:00:07.348 ******** 2026-01-05 01:04:59.172412 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating services (5 retries left). 2026-01-05 01:04:59.172419 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating services (4 retries left). 2026-01-05 01:04:59.172425 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating services (3 retries left). 2026-01-05 01:04:59.172432 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating services (2 retries left). 2026-01-05 01:04:59.172438 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating services (1 retries left). 
2026-01-05 01:04:59.172548 | orchestrator | failed: [testbed-node-0] (item=neutron (network)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Openstack Networking", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9696"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9696"}], "name": "neutron", "type": "network"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 133, in _do_create_plugin\n disc = self.get_discovery(session,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 605, in get_discovery\n return discover.get_discovery(session=session, url=url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1459, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 539, in __init__\n self._data = get_version_data(session, url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 106, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1154, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 985, in request\n raise 
exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767575096.8744376-3318-124959590572528/AnsiballZ_catalog_service.py\", line 107, in <module>\n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767575096.8744376-3318-124959590572528/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1767575096.8744376-3318-124959590572528/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"<frozen runpy>\", line 226, in run_module\n File \"<frozen runpy>\", line 98, in _run_module_code\n File \"<frozen runpy>\", line 88, in _run_code\n File \"/tmp/ansible_os_keystone_service_payload_j8cwvip7/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in <module>\n File \"/tmp/ansible_os_keystone_service_payload_j8cwvip7/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_os_keystone_service_payload_j8cwvip7/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_keystone_service_payload_j8cwvip7/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_os_keystone_service_payload_j8cwvip7/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File 
\"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 88, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 286, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 268, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 131, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 203, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 155, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. 
Please check that your auth_url is correct. Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2026-01-05 01:04:59.172567 | orchestrator | 2026-01-05 01:04:59.172575 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:04:59.172582 | orchestrator | testbed-node-0 : ok=6  changed=0 unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-01-05 01:04:59.172588 | orchestrator | testbed-node-1 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 01:04:59.172595 | orchestrator | testbed-node-2 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 01:04:59.172601 | orchestrator | testbed-node-3 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 01:04:59.172607 | orchestrator | testbed-node-4 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 01:04:59.172619 | orchestrator | testbed-node-5 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 01:04:59.172626 | orchestrator | 2026-01-05 01:04:59.172632 | orchestrator | 2026-01-05 01:04:59.172638 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:04:59.172644 | orchestrator | Monday 05 January 2026 01:04:58 +0000 (0:01:06.938) 0:01:14.287 ******** 2026-01-05 01:04:59.172650 | orchestrator | =============================================================================== 2026-01-05 01:04:59.172656 | orchestrator | service-ks-register : neutron | Creating services ---------------------- 66.94s 2026-01-05 01:04:59.172662 | orchestrator | neutron : Get container facts ------------------------------------------- 1.43s 2026-01-05 01:04:59.172668 | orchestrator | neutron : include_tasks ------------------------------------------------- 1.39s 2026-01-05 01:04:59.172674 | orchestrator | 
neutron : Get container volume facts ------------------------------------ 1.23s
2026-01-05 01:04:59.172680 | orchestrator | neutron : Check for ML2/OVN presence ------------------------------------ 0.92s
2026-01-05 01:04:59.172686 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.79s
2026-01-05 01:04:59.172693 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.67s
2026-01-05 01:04:59.172703 | orchestrator | neutron : Check for ML2/OVS presence ------------------------------------ 0.64s
2026-01-05 01:04:59.172711 | orchestrator | 2026-01-05 01:04:59 | INFO  | Task dd114b84-7e37-4b00-a1d4-5f1d61828f7c is in state SUCCESS
2026-01-05 01:04:59.172719 | orchestrator | 2026-01-05 01:04:59 | INFO  | Task bce4d2da-69f3-48f7-b791-ff2f86de16b3 is in state STARTED
2026-01-05 01:04:59.176161 | orchestrator | 2026-01-05 01:04:59 | INFO  | Task 2d4d2aba-07b5-465c-8f60-d51dff8990e1 is in state STARTED
2026-01-05 01:04:59.176232 | orchestrator | 2026-01-05 01:04:59 | INFO  | Task 2d1a48bc-e9f0-4a10-8aed-5e0e52b44ca6 is in state STARTED
2026-01-05 01:04:59.176482 | orchestrator | 2026-01-05 01:04:59 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED
2026-01-05 01:04:59.176493 | orchestrator | 2026-01-05 01:04:59 | INFO  | Wait 1 second(s) until the next check
[... 10 identical polling cycles from 01:05:02 to 01:05:29 elided: tasks e3a9f185, bce4d2da, 2d4d2aba, 2d1a48bc and 0a6d3b01 all remain in state STARTED ...]
2026-01-05 01:05:32.763585 | orchestrator | 2026-01-05 01:05:32 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 01:05:32.765265 | orchestrator | 2026-01-05 01:05:32 | INFO  | Task bce4d2da-69f3-48f7-b791-ff2f86de16b3 is in state SUCCESS
2026-01-05 01:05:32.767389 | orchestrator | 2026-01-05 01:05:32 | INFO  | Task b1bce162-7704-4059-99ef-df4e48953f81 is in state STARTED
2026-01-05 01:05:32.769690 | orchestrator | 2026-01-05 01:05:32 | INFO  | Task 2d4d2aba-07b5-465c-8f60-d51dff8990e1 is in state STARTED
2026-01-05 01:05:32.771808 | orchestrator | 2026-01-05 01:05:32 | INFO  | Task 2d1a48bc-e9f0-4a10-8aed-5e0e52b44ca6 is in state STARTED
2026-01-05 01:05:32.775068 | orchestrator | 2026-01-05 01:05:32 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED
2026-01-05 01:05:32.775222 | orchestrator | 2026-01-05 01:05:32 | INFO  | Wait 1 second(s) until the next check
[... 11 identical polling cycles from 01:05:35 to 01:06:06 elided: tasks e3a9f185, b1bce162, 2d4d2aba, 2d1a48bc and 0a6d3b01 all remain in state STARTED ...]
2026-01-05 01:06:09.459291 | orchestrator | 2026-01-05 01:06:09 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 01:06:09.466578 | orchestrator | 2026-01-05 01:06:09 | INFO  | Task c27a8a8c-2c3d-4438-9cc4-fab7cca671b9 is in state STARTED
2026-01-05 01:06:09.468216 | orchestrator | 2026-01-05 01:06:09 | INFO  | Task b1bce162-7704-4059-99ef-df4e48953f81 is in state STARTED
2026-01-05 01:06:09.471843 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-01-05 01:06:09.471853 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-01-05 01:06:09.471858 | orchestrator | Monday 05 January 2026 01:04:35 +0000 (0:00:00.246) 0:00:00.246
********
2026-01-05 01:06:09.471863 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-01-05 01:06:09.471883 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-01-05 01:06:09.471887 | orchestrator | Monday 05 January 2026 01:04:35 +0000 (0:00:00.265) 0:00:00.512 ********
2026-01-05 01:06:09.471892 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-01-05 01:06:09.471896 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-01-05 01:06:09.471901 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-01-05 01:06:09.471909 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-01-05 01:06:09.471913 | orchestrator | Monday 05 January 2026 01:04:37 +0000 (0:00:01.371) 0:00:01.884 ********
2026-01-05 01:06:09.471918 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-01-05 01:06:09.471925 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-01-05 01:06:09.471929 | orchestrator | Monday 05 January 2026 01:04:38 +0000 (0:00:00.971) 0:00:03.587 ********
2026-01-05 01:06:09.471934 | orchestrator | changed: [testbed-manager]
2026-01-05 01:06:09.471942 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-01-05 01:06:09.471946 | orchestrator | Monday 05 January 2026 01:04:39 +0000 (0:00:00.955) 0:00:04.559 ********
2026-01-05 01:06:09.471950 | orchestrator | changed: [testbed-manager]
2026-01-05 01:06:09.471958 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-01-05 01:06:09.471962 | orchestrator | Monday 05 January 2026 01:04:40 +0000 (0:00:00.955) 0:00:05.514 ********
2026-01-05 01:06:09.471983 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-01-05 01:06:09.471987 | orchestrator | ok: [testbed-manager]
2026-01-05 01:06:09.471995 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-01-05 01:06:09.471999 | orchestrator | Monday 05 January 2026 01:05:20 +0000 (0:00:39.856) 0:00:45.371 ********
2026-01-05 01:06:09.472003 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-01-05 01:06:09.472007 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-01-05 01:06:09.472011 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-01-05 01:06:09.472015 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-01-05 01:06:09.472019 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-01-05 01:06:09.472027 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-01-05 01:06:09.472031 | orchestrator | Monday 05 January 2026 01:05:24 +0000 (0:00:03.966) 0:00:49.337 ********
2026-01-05 01:06:09.472035 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-01-05 01:06:09.472043 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-01-05 01:06:09.472047 | orchestrator | Monday 05 January 2026 01:05:24 +0000 (0:00:00.498) 0:00:49.835 ********
2026-01-05 01:06:09.472051 | orchestrator | skipping: [testbed-manager]
2026-01-05 01:06:09.472059 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-01-05 01:06:09.472063 | orchestrator | Monday 05 January 2026 01:05:25 +0000 (0:00:00.139) 0:00:49.975 ********
2026-01-05 01:06:09.472066 | orchestrator | skipping: [testbed-manager]
2026-01-05 01:06:09.472074 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-01-05 01:06:09.472078 | orchestrator | Monday 05 January 2026 01:05:25 +0000 (0:00:00.550) 0:00:50.526 ********
2026-01-05 01:06:09.472082 | orchestrator | changed: [testbed-manager]
2026-01-05 01:06:09.472090 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-01-05 01:06:09.472094 | orchestrator | Monday 05 January 2026 01:05:27 +0000 (0:00:01.440) 0:00:51.966 ********
2026-01-05 01:06:09.472098 | orchestrator | changed: [testbed-manager]
2026-01-05 01:06:09.472109 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-01-05 01:06:09.472115 | orchestrator | Monday 05 January 2026 01:05:27 +0000 (0:00:00.779) 0:00:52.746 ********
2026-01-05 01:06:09.472121 | orchestrator | changed: [testbed-manager]
2026-01-05 01:06:09.472134 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-01-05 01:06:09.472141 | orchestrator | Monday 05 January 2026 01:05:28 +0000 (0:00:00.546) 0:00:53.293 ********
2026-01-05 01:06:09.472148 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-01-05 01:06:09.472155 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-01-05 01:06:09.472159 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-01-05 01:06:09.472163 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-01-05 01:06:09.472171 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 01:06:09.472175 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 01:06:09.472198 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 01:06:09.472202 | orchestrator | Monday 05 January 2026 01:05:29 +0000 (0:00:01.350) 0:00:54.643 ********
2026-01-05 01:06:09.472206 | orchestrator | ===============================================================================
2026-01-05 01:06:09.472210 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 39.86s
2026-01-05 01:06:09.472220 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.97s
2026-01-05 01:06:09.472224 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.70s
2026-01-05 01:06:09.472231 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.44s
2026-01-05 01:06:09.472235 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.37s
2026-01-05 01:06:09.472239 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.35s
2026-01-05 01:06:09.472243 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.97s
2026-01-05 01:06:09.472247 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.96s
2026-01-05 01:06:09.472251 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.78s
2026-01-05 01:06:09.472256 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.55s
2026-01-05 01:06:09.472262 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.55s
2026-01-05 01:06:09.472268 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.50s
2026-01-05 01:06:09.472278 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.27s
2026-01-05 01:06:09.472285 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s
2026-01-05 01:06:09.472305 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 01:06:09.472316 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 01:06:09.472322 | orchestrator | Monday 05 January 2026 01:04:57 +0000 (0:00:00.238) 0:00:00.238 ********
2026-01-05 01:06:09.472371 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:06:09.472378 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:06:09.472384 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:06:09.472397 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 01:06:09.472404 | orchestrator | Monday 05 January 2026 01:04:57 +0000 (0:00:00.290) 0:00:00.529 ********
2026-01-05 01:06:09.472411 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-01-05 01:06:09.472417 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-01-05 01:06:09.472424 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-01-05 01:06:09.472437 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-01-05 01:06:09.472446 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-01-05 01:06:09.472451 | orchestrator | Monday 05 January 2026 01:04:58 +0000 (0:00:00.544) 0:00:00.939 ********
2026-01-05 01:06:09.472456 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:06:09.472467 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-01-05 01:06:09.472472 | orchestrator | Monday 05 January 2026 01:04:58 +0000 (0:00:00.544) 0:00:01.483 ********
2026-01-05 01:06:09.472477 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating services (5 retries left).
2026-01-05 01:06:09.472481 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating services (4 retries left).
2026-01-05 01:06:09.472486 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating services (3 retries left).
2026-01-05 01:06:09.472491 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating services (2 retries left).
2026-01-05 01:06:09.472496 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating services (1 retries left).
2026-01-05 01:06:09.472528 | orchestrator | failed: [testbed-node-0] (item=magnum (container-infra)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Container Infrastructure Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9511/v1"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9511/v1"}], "name": "magnum", "type": "container-infra"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 133, in _do_create_plugin\n disc = self.get_discovery(session,\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 605, in get_discovery\n return discover.get_discovery(session=session, url=url,\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1459, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 539, in __init__\n self._data = get_version_data(session, url,\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 106, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1154, in get\n return self.request(url, 'GET', **kwargs)\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 985, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767575165.1980119-3741-125854109944392/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767575165.1980119-3741-125854109944392/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1767575165.1980119-3741-125854109944392/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_os_keystone_service_payload__5na3_y5/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_os_keystone_service_payload__5na3_y5/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_os_keystone_service_payload__5na3_y5/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_keystone_service_payload__5na3_y5/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_os_keystone_service_payload__5na3_y5/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 88, in __get__\n proxy = self._make_proxy(instance)\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 286, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 268, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 131, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 203, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 155, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
2026-01-05 01:06:09.472553 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 01:06:09.472558 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-01-05 01:06:09.472563 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 01:06:09.472568 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 01:06:09.472583 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 01:06:09.472588 | orchestrator | Monday 05 January 2026 01:06:06 +0000 (0:01:07.741) 0:01:09.225 ********
2026-01-05 01:06:09.472592 | orchestrator | ===============================================================================
2026-01-05 01:06:09.472597 | orchestrator | service-ks-register : magnum | Creating services ----------------------- 67.74s
2026-01-05 01:06:09.472602 | orchestrator | magnum : include_tasks -------------------------------------------------- 0.54s
2026-01-05 01:06:09.472607 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s
2026-01-05 01:06:09.472612 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s
2026-01-05 01:06:09.472617 | orchestrator | 2026-01-05 01:06:09 | INFO  | Task 2d4d2aba-07b5-465c-8f60-d51dff8990e1 is in state SUCCESS
2026-01-05 01:06:09.472622 | orchestrator | 2026-01-05 01:06:09 | INFO  | Task 2d1a48bc-e9f0-4a10-8aed-5e0e52b44ca6 is in state SUCCESS
2026-01-05 01:06:09.472640 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:06:09.472645 | orchestrator | 2026-01-05 01:06:09.472649 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 01:06:09.472654 | orchestrator | Monday 05 January 2026 01:04:57 +0000 (0:00:00.248) 0:00:00.248 ******** 2026-01-05 01:06:09.472659 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:06:09.472664 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:06:09.472669 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:06:09.472674 | orchestrator | 2026-01-05 01:06:09.472679 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:06:09.472683 | orchestrator | Monday 05 January 2026 01:04:58 +0000 (0:00:00.318) 0:00:00.566 ******** 2026-01-05 01:06:09.472688 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-01-05 01:06:09.472693 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-01-05 01:06:09.472698 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-01-05 01:06:09.472702 | orchestrator | 2026-01-05 01:06:09.472706 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-01-05 01:06:09.472710 | orchestrator | 2026-01-05 01:06:09.472714 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-05 01:06:09.472718 | orchestrator | Monday 05 January 2026 01:04:58 +0000 (0:00:00.426) 0:00:00.993 ******** 2026-01-05 01:06:09.472722 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:06:09.472726 | orchestrator | 2026-01-05 01:06:09.472730 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-01-05 01:06:09.472734 | orchestrator | 
Monday 05 January 2026 01:04:59 +0000 (0:00:00.621) 0:00:01.615 ******** 2026-01-05 01:06:09.472738 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating services (5 retries left). 2026-01-05 01:06:09.472742 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating services (4 retries left). 2026-01-05 01:06:09.472746 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating services (3 retries left). 2026-01-05 01:06:09.472753 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating services (2 retries left). 2026-01-05 01:06:09.472757 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating services (1 retries left). 2026-01-05 01:06:09.472773 | orchestrator | failed: [testbed-node-0] (item=placement (placement)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Placement Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:8780"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:8780"}], "name": "placement", "type": "placement"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. 
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 133, in _do_create_plugin\n disc = self.get_discovery(session,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 605, in get_discovery\n return discover.get_discovery(session=session, url=url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1459, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 539, in __init__\n self._data = get_version_data(session, url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 106, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1154, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 985, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767575164.9151702-3723-16047294721663/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767575164.9151702-3723-16047294721663/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File 
\"/tmp/ansible-tmp-1767575164.9151702-3723-16047294721663/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_os_keystone_service_payload_92lz9jd4/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_os_keystone_service_payload_92lz9jd4/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_os_keystone_service_payload_92lz9jd4/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_keystone_service_payload_92lz9jd4/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_os_keystone_service_payload_92lz9jd4/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 88, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 286, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 268, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 131, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 203, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 155, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2026-01-05 01:06:09.472882 | orchestrator | 2026-01-05 01:06:09.472892 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:06:09.472898 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2026-01-05 01:06:09.472905 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:06:09.472912 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:06:09.472919 | orchestrator | 2026-01-05 01:06:09.472926 | orchestrator | 2026-01-05 01:06:09.472932 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:06:09.472939 | orchestrator | Monday 05 January 2026 01:06:06 +0000 (0:01:07.241) 0:01:08.856 ******** 2026-01-05 01:06:09.472945 | orchestrator | =============================================================================== 2026-01-05 01:06:09.472952 | orchestrator | service-ks-register : placement | Creating services -------------------- 67.24s 2026-01-05 01:06:09.472958 | orchestrator | placement : include_tasks ----------------------------------------------- 0.62s 2026-01-05 01:06:09.472965 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s 2026-01-05 01:06:09.472972 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2026-01-05 01:06:09.472981 | orchestrator | 2026-01-05 01:06:09 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:06:09.474725 | orchestrator | 2026-01-05 01:06:09 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:06:09.474800 | orchestrator | 2026-01-05 01:06:09 | INFO  | Wait 1 
second(s) until the next check 2026-01-05 01:06:12.535606 | orchestrator | 2026-01-05 01:06:12 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:06:12.536209 | orchestrator | 2026-01-05 01:06:12 | INFO  | Task c27a8a8c-2c3d-4438-9cc4-fab7cca671b9 is in state STARTED 2026-01-05 01:06:12.537475 | orchestrator | 2026-01-05 01:06:12 | INFO  | Task b1bce162-7704-4059-99ef-df4e48953f81 is in state STARTED 2026-01-05 01:06:12.538291 | orchestrator | 2026-01-05 01:06:12 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:06:12.539266 | orchestrator | 2026-01-05 01:06:12 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:06:12.539306 | orchestrator | 2026-01-05 01:06:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:06:15.583934 | orchestrator | 2026-01-05 01:06:15 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:06:15.584420 | orchestrator | 2026-01-05 01:06:15 | INFO  | Task c27a8a8c-2c3d-4438-9cc4-fab7cca671b9 is in state STARTED 2026-01-05 01:06:15.585145 | orchestrator | 2026-01-05 01:06:15 | INFO  | Task b1bce162-7704-4059-99ef-df4e48953f81 is in state STARTED 2026-01-05 01:06:15.587472 | orchestrator | 2026-01-05 01:06:15 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:06:15.588258 | orchestrator | 2026-01-05 01:06:15 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:06:15.588314 | orchestrator | 2026-01-05 01:06:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:06:18.629955 | orchestrator | 2026-01-05 01:06:18 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:06:18.630116 | orchestrator | 2026-01-05 01:06:18 | INFO  | Task c27a8a8c-2c3d-4438-9cc4-fab7cca671b9 is in state STARTED 2026-01-05 01:06:18.630883 | orchestrator | 2026-01-05 01:06:18 | INFO  | Task 
b1bce162-7704-4059-99ef-df4e48953f81 is in state STARTED 2026-01-05 01:06:18.633058 | orchestrator | 2026-01-05 01:06:18 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:06:18.634426 | orchestrator | 2026-01-05 01:06:18 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:06:18.634482 | orchestrator | 2026-01-05 01:06:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:06:21.671540 | orchestrator | 2026-01-05 01:06:21 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:06:21.671631 | orchestrator | 2026-01-05 01:06:21 | INFO  | Task c27a8a8c-2c3d-4438-9cc4-fab7cca671b9 is in state STARTED 2026-01-05 01:06:21.675440 | orchestrator | 2026-01-05 01:06:21 | INFO  | Task b1bce162-7704-4059-99ef-df4e48953f81 is in state STARTED 2026-01-05 01:06:21.677624 | orchestrator | 2026-01-05 01:06:21 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:06:21.679022 | orchestrator | 2026-01-05 01:06:21 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:06:21.679074 | orchestrator | 2026-01-05 01:06:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:06:24.730245 | orchestrator | 2026-01-05 01:06:24 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:06:24.730666 | orchestrator | 2026-01-05 01:06:24 | INFO  | Task c27a8a8c-2c3d-4438-9cc4-fab7cca671b9 is in state STARTED 2026-01-05 01:06:24.734254 | orchestrator | 2026-01-05 01:06:24 | INFO  | Task b1bce162-7704-4059-99ef-df4e48953f81 is in state STARTED 2026-01-05 01:06:24.736720 | orchestrator | 2026-01-05 01:06:24 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:06:24.739135 | orchestrator | 2026-01-05 01:06:24 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:06:24.739200 | orchestrator | 2026-01-05 01:06:24 | INFO  | Wait 1 
second(s) until the next check 2026-01-05 01:06:27.791951 | orchestrator | 2026-01-05 01:06:27 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:06:27.792033 | orchestrator | 2026-01-05 01:06:27 | INFO  | Task c27a8a8c-2c3d-4438-9cc4-fab7cca671b9 is in state STARTED 2026-01-05 01:06:27.793072 | orchestrator | 2026-01-05 01:06:27 | INFO  | Task b1bce162-7704-4059-99ef-df4e48953f81 is in state STARTED 2026-01-05 01:06:27.793980 | orchestrator | 2026-01-05 01:06:27 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:06:27.796337 | orchestrator | 2026-01-05 01:06:27 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:06:27.796374 | orchestrator | 2026-01-05 01:06:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:06:30.840535 | orchestrator | 2026-01-05 01:06:30 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:06:30.840791 | orchestrator | 2026-01-05 01:06:30 | INFO  | Task c27a8a8c-2c3d-4438-9cc4-fab7cca671b9 is in state STARTED 2026-01-05 01:06:30.843458 | orchestrator | 2026-01-05 01:06:30 | INFO  | Task b1bce162-7704-4059-99ef-df4e48953f81 is in state STARTED 2026-01-05 01:06:30.844825 | orchestrator | 2026-01-05 01:06:30 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:06:30.846376 | orchestrator | 2026-01-05 01:06:30 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:06:30.846461 | orchestrator | 2026-01-05 01:06:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:06:33.884773 | orchestrator | 2026-01-05 01:06:33 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:06:33.886644 | orchestrator | 2026-01-05 01:06:33 | INFO  | Task c27a8a8c-2c3d-4438-9cc4-fab7cca671b9 is in state STARTED 2026-01-05 01:06:33.887828 | orchestrator | 2026-01-05 01:06:33 | INFO  | Task 
b1bce162-7704-4059-99ef-df4e48953f81 is in state STARTED 2026-01-05 01:06:33.890120 | orchestrator | 2026-01-05 01:06:33 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:06:33.892533 | orchestrator | 2026-01-05 01:06:33 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:06:33.892740 | orchestrator | 2026-01-05 01:06:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:06:36.936019 | orchestrator | 2026-01-05 01:06:36 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:06:36.936535 | orchestrator | 2026-01-05 01:06:36 | INFO  | Task c27a8a8c-2c3d-4438-9cc4-fab7cca671b9 is in state STARTED 2026-01-05 01:06:36.937881 | orchestrator | 2026-01-05 01:06:36 | INFO  | Task b1bce162-7704-4059-99ef-df4e48953f81 is in state STARTED 2026-01-05 01:06:36.938190 | orchestrator | 2026-01-05 01:06:36 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:06:36.938909 | orchestrator | 2026-01-05 01:06:36 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:06:36.938946 | orchestrator | 2026-01-05 01:06:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:06:40.558696 | orchestrator | 2026-01-05 01:06:40 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:06:40.559528 | orchestrator | 2026-01-05 01:06:40 | INFO  | Task c27a8a8c-2c3d-4438-9cc4-fab7cca671b9 is in state STARTED 2026-01-05 01:06:40.560540 | orchestrator | 2026-01-05 01:06:40 | INFO  | Task b1bce162-7704-4059-99ef-df4e48953f81 is in state STARTED 2026-01-05 01:06:40.561553 | orchestrator | 2026-01-05 01:06:40 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:06:40.562524 | orchestrator | 2026-01-05 01:06:40 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:06:40.562703 | orchestrator | 2026-01-05 01:06:40 | INFO  | Wait 1 
second(s) until the next check 2026-01-05 01:06:44.054834 | orchestrator | 2026-01-05 01:06:43 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:06:44.054925 | orchestrator | 2026-01-05 01:06:43 | INFO  | Task c27a8a8c-2c3d-4438-9cc4-fab7cca671b9 is in state STARTED 2026-01-05 01:06:44.054932 | orchestrator | 2026-01-05 01:06:43 | INFO  | Task b1bce162-7704-4059-99ef-df4e48953f81 is in state STARTED 2026-01-05 01:06:44.054937 | orchestrator | 2026-01-05 01:06:43 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:06:44.054942 | orchestrator | 2026-01-05 01:06:43 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:06:44.054947 | orchestrator | 2026-01-05 01:06:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:06:46.663157 | orchestrator | 2026-01-05 01:06:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:06:46.664856 | orchestrator | 2026-01-05 01:06:46 | INFO  | Task c27a8a8c-2c3d-4438-9cc4-fab7cca671b9 is in state STARTED 2026-01-05 01:06:46.665973 | orchestrator | 2026-01-05 01:06:46 | INFO  | Task b1bce162-7704-4059-99ef-df4e48953f81 is in state STARTED 2026-01-05 01:06:46.666537 | orchestrator | 2026-01-05 01:06:46 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:06:46.667199 | orchestrator | 2026-01-05 01:06:46 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:06:46.667224 | orchestrator | 2026-01-05 01:06:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:06:49.726584 | orchestrator | 2026-01-05 01:06:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:06:49.727884 | orchestrator | 2026-01-05 01:06:49 | INFO  | Task c27a8a8c-2c3d-4438-9cc4-fab7cca671b9 is in state STARTED 2026-01-05 01:06:49.728933 | orchestrator | 2026-01-05 01:06:49 | INFO  | Task 
b1bce162-7704-4059-99ef-df4e48953f81 is in state STARTED 2026-01-05 01:06:49.730931 | orchestrator | 2026-01-05 01:06:49 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:06:49.732566 | orchestrator | 2026-01-05 01:06:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:06:49.732844 | orchestrator | 2026-01-05 01:06:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:06:52.786342 | orchestrator | 2026-01-05 01:06:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:06:52.788663 | orchestrator | 2026-01-05 01:06:52 | INFO  | Task c27a8a8c-2c3d-4438-9cc4-fab7cca671b9 is in state STARTED 2026-01-05 01:06:52.790737 | orchestrator | 2026-01-05 01:06:52 | INFO  | Task b1bce162-7704-4059-99ef-df4e48953f81 is in state STARTED 2026-01-05 01:06:52.792245 | orchestrator | 2026-01-05 01:06:52 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:06:52.794167 | orchestrator | 2026-01-05 01:06:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:06:52.794210 | orchestrator | 2026-01-05 01:06:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:06:55.850606 | orchestrator | 2026-01-05 01:06:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:06:55.850707 | orchestrator | 2026-01-05 01:06:55 | INFO  | Task c27a8a8c-2c3d-4438-9cc4-fab7cca671b9 is in state STARTED 2026-01-05 01:06:55.852955 | orchestrator | 2026-01-05 01:06:55 | INFO  | Task b1bce162-7704-4059-99ef-df4e48953f81 is in state STARTED 2026-01-05 01:06:55.855980 | orchestrator | 2026-01-05 01:06:55 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:06:55.860144 | orchestrator | 2026-01-05 01:06:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:06:55.860213 | orchestrator | 2026-01-05 01:06:55 | INFO  | Wait 1 
second(s) until the next check 2026-01-05 01:06:58.906793 | orchestrator | 2026-01-05 01:06:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:06:58.909908 | orchestrator | 2026-01-05 01:06:58 | INFO  | Task c27a8a8c-2c3d-4438-9cc4-fab7cca671b9 is in state STARTED 2026-01-05 01:06:58.912130 | orchestrator | 2026-01-05 01:06:58 | INFO  | Task b1bce162-7704-4059-99ef-df4e48953f81 is in state STARTED 2026-01-05 01:06:58.914233 | orchestrator | 2026-01-05 01:06:58 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:06:58.916377 | orchestrator | 2026-01-05 01:06:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:06:58.916677 | orchestrator | 2026-01-05 01:06:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:07:01.965335 | orchestrator | 2026-01-05 01:07:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:07:01.966908 | orchestrator | 2026-01-05 01:07:01 | INFO  | Task c27a8a8c-2c3d-4438-9cc4-fab7cca671b9 is in state STARTED 2026-01-05 01:07:01.968710 | orchestrator | 2026-01-05 01:07:01 | INFO  | Task b1bce162-7704-4059-99ef-df4e48953f81 is in state STARTED 2026-01-05 01:07:01.970493 | orchestrator | 2026-01-05 01:07:01 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:07:01.972205 | orchestrator | 2026-01-05 01:07:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:07:01.972343 | orchestrator | 2026-01-05 01:07:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:07:04.999886 | orchestrator | 2026-01-05 01:07:05 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:07:05.000383 | orchestrator | 2026-01-05 01:07:05 | INFO  | Task c27a8a8c-2c3d-4438-9cc4-fab7cca671b9 is in state STARTED 2026-01-05 01:07:05.002306 | orchestrator | 2026-01-05 01:07:05 | INFO  | Task 
b1bce162-7704-4059-99ef-df4e48953f81 is in state STARTED 2026-01-05 01:07:05.003438 | orchestrator | 2026-01-05 01:07:05 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:07:05.004477 | orchestrator | 2026-01-05 01:07:05 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:07:05.004507 | orchestrator | 2026-01-05 01:07:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:07:08.071269 | orchestrator | 2026-01-05 01:07:08 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:07:08.073449 | orchestrator | 2026-01-05 01:07:08 | INFO  | Task c27a8a8c-2c3d-4438-9cc4-fab7cca671b9 is in state STARTED 2026-01-05 01:07:08.075092 | orchestrator | 2026-01-05 01:07:08 | INFO  | Task b1bce162-7704-4059-99ef-df4e48953f81 is in state STARTED 2026-01-05 01:07:08.076831 | orchestrator | 2026-01-05 01:07:08 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:07:08.078512 | orchestrator | 2026-01-05 01:07:08 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:07:08.078556 | orchestrator | 2026-01-05 01:07:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:07:11.130944 | orchestrator | 2026-01-05 01:07:11 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:07:11.133221 | orchestrator | 2026-01-05 01:07:11 | INFO  | Task c27a8a8c-2c3d-4438-9cc4-fab7cca671b9 is in state STARTED 2026-01-05 01:07:11.137012 | orchestrator | 2026-01-05 01:07:11 | INFO  | Task b1bce162-7704-4059-99ef-df4e48953f81 is in state SUCCESS 2026-01-05 01:07:11.140710 | orchestrator | 2026-01-05 01:07:11 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:07:11.143620 | orchestrator | 2026-01-05 01:07:11 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:07:11.143807 | orchestrator | 2026-01-05 01:07:11 | INFO  | Wait 1 
second(s) until the next check 2026-01-05 01:07:14.200717 | orchestrator | 2026-01-05 01:07:14 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:07:14.202566 | orchestrator | 2026-01-05 01:07:14 | INFO  | Task c27a8a8c-2c3d-4438-9cc4-fab7cca671b9 is in state STARTED 2026-01-05 01:07:14.203846 | orchestrator | 2026-01-05 01:07:14 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:07:14.205801 | orchestrator | 2026-01-05 01:07:14 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:07:14.206256 | orchestrator | 2026-01-05 01:07:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:07:17.265075 | orchestrator | 2026-01-05 01:07:17 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:07:17.266618 | orchestrator | 2026-01-05 01:07:17 | INFO  | Task c27a8a8c-2c3d-4438-9cc4-fab7cca671b9 is in state STARTED 2026-01-05 01:07:17.268576 | orchestrator | 2026-01-05 01:07:17 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:07:17.270398 | orchestrator | 2026-01-05 01:07:17 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:07:17.271001 | orchestrator | 2026-01-05 01:07:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:07:20.324686 | orchestrator | 2026-01-05 01:07:20 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:07:20.324797 | orchestrator | 2026-01-05 01:07:20 | INFO  | Task c27a8a8c-2c3d-4438-9cc4-fab7cca671b9 is in state STARTED 2026-01-05 01:07:20.325863 | orchestrator | 2026-01-05 01:07:20 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:07:20.327341 | orchestrator | 2026-01-05 01:07:20 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:07:20.327394 | orchestrator | 2026-01-05 01:07:20 | INFO  | Wait 1 second(s) until the next check 
2026-01-05 01:09:19.687154 | orchestrator | 2026-01-05 01:09:19 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:09:19.690583 | orchestrator | 2026-01-05 01:09:19 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:09:19.698404 | orchestrator | 2026-01-05 01:09:19 | INFO  | Task c27a8a8c-2c3d-4438-9cc4-fab7cca671b9 is in state SUCCESS 2026-01-05 01:09:19.699116 | orchestrator | 2026-01-05 01:09:19.699159 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-05 01:09:19.699166 | orchestrator | 2.16.14 2026-01-05 01:09:19.699213 | orchestrator | 2026-01-05 01:09:19.699220 | orchestrator | PLAY [Bootstraph ceph dashboard] 
*********************************************** 2026-01-05 01:09:19.699227 | orchestrator | 2026-01-05 01:09:19.699241 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-01-05 01:09:19.699260 | orchestrator | Monday 05 January 2026 01:05:34 +0000 (0:00:00.244) 0:00:00.244 ******** 2026-01-05 01:09:19.699267 | orchestrator | changed: [testbed-manager] 2026-01-05 01:09:19.699275 | orchestrator | 2026-01-05 01:09:19.699281 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-01-05 01:09:19.699287 | orchestrator | Monday 05 January 2026 01:05:35 +0000 (0:00:01.571) 0:00:01.815 ******** 2026-01-05 01:09:19.699302 | orchestrator | changed: [testbed-manager] 2026-01-05 01:09:19.699316 | orchestrator | 2026-01-05 01:09:19.699322 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-01-05 01:09:19.699328 | orchestrator | Monday 05 January 2026 01:05:36 +0000 (0:00:01.069) 0:00:02.884 ******** 2026-01-05 01:09:19.699335 | orchestrator | changed: [testbed-manager] 2026-01-05 01:09:19.699342 | orchestrator | 2026-01-05 01:09:19.699349 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-01-05 01:09:19.699357 | orchestrator | Monday 05 January 2026 01:05:37 +0000 (0:00:01.039) 0:00:03.924 ******** 2026-01-05 01:09:19.699363 | orchestrator | changed: [testbed-manager] 2026-01-05 01:09:19.699369 | orchestrator | 2026-01-05 01:09:19.699375 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-01-05 01:09:19.699381 | orchestrator | Monday 05 January 2026 01:05:39 +0000 (0:00:01.282) 0:00:05.206 ******** 2026-01-05 01:09:19.699389 | orchestrator | changed: [testbed-manager] 2026-01-05 01:09:19.699395 | orchestrator | 2026-01-05 01:09:19.699401 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] 
********************** 2026-01-05 01:09:19.699407 | orchestrator | Monday 05 January 2026 01:05:40 +0000 (0:00:01.070) 0:00:06.276 ******** 2026-01-05 01:09:19.699414 | orchestrator | changed: [testbed-manager] 2026-01-05 01:09:19.699420 | orchestrator | 2026-01-05 01:09:19.699427 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-01-05 01:09:19.699433 | orchestrator | Monday 05 January 2026 01:05:41 +0000 (0:00:01.105) 0:00:07.382 ******** 2026-01-05 01:09:19.699440 | orchestrator | changed: [testbed-manager] 2026-01-05 01:09:19.699446 | orchestrator | 2026-01-05 01:09:19.699453 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-01-05 01:09:19.699458 | orchestrator | Monday 05 January 2026 01:05:43 +0000 (0:00:02.106) 0:00:09.488 ******** 2026-01-05 01:09:19.699462 | orchestrator | changed: [testbed-manager] 2026-01-05 01:09:19.699466 | orchestrator | 2026-01-05 01:09:19.699470 | orchestrator | TASK [Create admin user] ******************************************************* 2026-01-05 01:09:19.699474 | orchestrator | Monday 05 January 2026 01:05:44 +0000 (0:00:01.283) 0:00:10.771 ******** 2026-01-05 01:09:19.699477 | orchestrator | changed: [testbed-manager] 2026-01-05 01:09:19.699501 | orchestrator | 2026-01-05 01:09:19.699507 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-01-05 01:09:19.699514 | orchestrator | Monday 05 January 2026 01:06:44 +0000 (0:00:59.478) 0:01:10.250 ******** 2026-01-05 01:09:19.699520 | orchestrator | skipping: [testbed-manager] 2026-01-05 01:09:19.699526 | orchestrator | 2026-01-05 01:09:19.699532 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-05 01:09:19.699539 | orchestrator | 2026-01-05 01:09:19.699546 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-05 
01:09:19.699552 | orchestrator | Monday 05 January 2026 01:06:44 +0000 (0:00:00.110) 0:01:10.360 ******** 2026-01-05 01:09:19.699558 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:09:19.699564 | orchestrator | 2026-01-05 01:09:19.699571 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-05 01:09:19.699577 | orchestrator | 2026-01-05 01:09:19.699583 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-05 01:09:19.699590 | orchestrator | Monday 05 January 2026 01:06:55 +0000 (0:00:11.596) 0:01:21.957 ******** 2026-01-05 01:09:19.699596 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:09:19.699603 | orchestrator | 2026-01-05 01:09:19.699609 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-05 01:09:19.699616 | orchestrator | 2026-01-05 01:09:19.699622 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-05 01:09:19.699628 | orchestrator | Monday 05 January 2026 01:07:07 +0000 (0:00:11.303) 0:01:33.260 ******** 2026-01-05 01:09:19.699634 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:09:19.699640 | orchestrator | 2026-01-05 01:09:19.699647 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:09:19.699653 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 01:09:19.699658 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:09:19.699662 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:09:19.699666 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:09:19.699670 | orchestrator | 2026-01-05 01:09:19.699674 | 
orchestrator | 2026-01-05 01:09:19.699677 | orchestrator | 2026-01-05 01:09:19.699681 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:09:19.699685 | orchestrator | Monday 05 January 2026 01:07:08 +0000 (0:00:01.226) 0:01:34.488 ******** 2026-01-05 01:09:19.699689 | orchestrator | =============================================================================== 2026-01-05 01:09:19.699693 | orchestrator | Create admin user ------------------------------------------------------ 59.48s 2026-01-05 01:09:19.699710 | orchestrator | Restart ceph manager service ------------------------------------------- 24.13s 2026-01-05 01:09:19.699713 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.11s 2026-01-05 01:09:19.699717 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.57s 2026-01-05 01:09:19.699722 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.28s 2026-01-05 01:09:19.699726 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.28s 2026-01-05 01:09:19.699731 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.11s 2026-01-05 01:09:19.699735 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.07s 2026-01-05 01:09:19.699740 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.07s 2026-01-05 01:09:19.699744 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.04s 2026-01-05 01:09:19.699748 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.11s 2026-01-05 01:09:19.699758 | orchestrator | 2026-01-05 01:09:19.701999 | orchestrator | 2026-01-05 01:09:19.702114 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2026-01-05 01:09:19.702127 | orchestrator | 2026-01-05 01:09:19.702163 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 01:09:19.702170 | orchestrator | Monday 05 January 2026 01:06:11 +0000 (0:00:00.293) 0:00:00.293 ******** 2026-01-05 01:09:19.702176 | orchestrator | ok: [testbed-manager] 2026-01-05 01:09:19.702183 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:09:19.702190 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:09:19.702221 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:09:19.702229 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:09:19.702236 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:09:19.702243 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:09:19.702249 | orchestrator | 2026-01-05 01:09:19.702257 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:09:19.702300 | orchestrator | Monday 05 January 2026 01:06:12 +0000 (0:00:00.872) 0:00:01.166 ******** 2026-01-05 01:09:19.702309 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-01-05 01:09:19.702318 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-01-05 01:09:19.702325 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-01-05 01:09:19.702331 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-01-05 01:09:19.702338 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-01-05 01:09:19.702345 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-01-05 01:09:19.702351 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-01-05 01:09:19.702358 | orchestrator | 2026-01-05 01:09:19.702385 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-01-05 01:09:19.702392 | orchestrator | 2026-01-05 01:09:19.702399 | 
orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-01-05 01:09:19.702405 | orchestrator | Monday 05 January 2026 01:06:13 +0000 (0:00:00.767) 0:00:01.933 ******** 2026-01-05 01:09:19.702413 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:09:19.702422 | orchestrator | 2026-01-05 01:09:19.702429 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-01-05 01:09:19.702455 | orchestrator | Monday 05 January 2026 01:06:14 +0000 (0:00:01.567) 0:00:03.501 ******** 2026-01-05 01:09:19.702467 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-05 01:09:19.702477 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:09:19.702486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:09:19.702547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:09:19.702570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:09:19.702575 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 
'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:09:19.702579 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:09:19.702584 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.702615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.702623 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:09:19.702636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.702644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.702656 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.702664 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.702671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.702679 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.702686 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.702692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.702704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.702711 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.702724 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-05 01:09:19.702774 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.702781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.702788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.702824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.702833 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.702840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.702862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.702869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.702875 | orchestrator | 2026-01-05 01:09:19.702881 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-01-05 01:09:19.702889 | orchestrator | Monday 05 January 2026 01:06:17 +0000 (0:00:03.077) 0:00:06.579 ******** 2026-01-05 01:09:19.702896 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:09:19.702903 | orchestrator | 2026-01-05 
01:09:19.702910 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-01-05 01:09:19.702916 | orchestrator | Monday 05 January 2026 01:06:19 +0000 (0:00:01.575) 0:00:08.154 ******** 2026-01-05 01:09:19.702924 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-05 01:09:19.702936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:09:19.702943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:09:19.702950 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:09:19.702985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:09:19.702992 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:09:19.702999 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:09:19.703006 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:09:19.703013 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.703142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.703151 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.703156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.703166 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.703171 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.703175 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.703179 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.703188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.703193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 
'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.703197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.703201 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.703211 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.703215 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-05 01:09:19.703235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.703244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.703248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.703252 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.703256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.703640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.703665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.703669 | orchestrator | 2026-01-05 01:09:19.703674 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-01-05 01:09:19.703678 | orchestrator | Monday 05 January 2026 01:06:25 +0000 (0:00:06.001) 0:00:14.156 ******** 2026-01-05 01:09:19.703683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:09:19.703695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:09:19.703699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:09:19.703704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:09:19.703708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:09:19.703713 | orchestrator | skipping: 
[testbed-node-0] 2026-01-05 01:09:19.703718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:09:19.703727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:09:19.703731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:09:19.703775 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-05 01:09:19.703781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:09:19.703785 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:09:19.703790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:09:19.703794 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:09:19.703810 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-05 01:09:19.703827 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:09:19.703831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:09:19.703835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:09:19.703839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:09:19.703844 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:09:19.703848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:09:19.703852 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:09:19.703856 | orchestrator | skipping: [testbed-manager] 2026-01-05 01:09:19.703860 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:09:19.703867 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:09:19.703871 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:09:19.703880 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 01:09:19.703887 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:09:19.703893 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:09:19.703899 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:09:19.703906 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 01:09:19.703913 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:09:19.703920 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:09:19.703927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:09:19.703939 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 01:09:19.703951 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:09:19.703958 | orchestrator | 2026-01-05 01:09:19.703965 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-01-05 01:09:19.703971 | orchestrator | Monday 05 January 2026 01:06:27 +0000 (0:00:01.659) 0:00:15.815 ******** 2026-01-05 01:09:19.703976 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-05 01:09:19.703980 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:09:19.703984 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:09:19.703989 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-05 01:09:19.703993 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:09:19.704053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:09:19.704060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:09:19.704065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:09:19.704069 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:09:19.704073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:09:19.704132 | orchestrator | skipping: [testbed-manager] 2026-01-05 01:09:19.704137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:09:19.704141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:09:19.704146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:09:19.704156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:09:19.704161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:09:19.704165 | orchestrator | skipping: 
[testbed-node-0] 2026-01-05 01:09:19.704169 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:09:19.704172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:09:19.704176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:09:19.704180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:09:19.704185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:09:19.704189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 01:09:19.704196 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:09:19.704204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:09:19.704209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:09:19.704213 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 01:09:19.704217 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:09:19.704221 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:09:19.704224 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:09:19.704229 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 
'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 01:09:19.704233 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:09:19.704238 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 01:09:19.704248 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 01:09:19.704260 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 01:09:19.704266 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:09:19.704273 | orchestrator | 2026-01-05 01:09:19.704279 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-01-05 01:09:19.704285 | orchestrator | Monday 05 January 2026 01:06:29 +0000 (0:00:02.074) 0:00:17.889 ******** 2026-01-05 01:09:19.704293 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-05 01:09:19.704319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:09:19.704326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:09:19.704358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:09:19.704363 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:09:19.704373 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 
01:09:19.704382 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:09:19.704387 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:09:19.704391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.704396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.704401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.704406 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.704417 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.704422 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.704431 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.704436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.704440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.704445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.704449 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.704455 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-05 01:09:19.704463 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.704470 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.704475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.704479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.704484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.704488 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.704496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.704501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.704505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.704510 | orchestrator | 2026-01-05 01:09:19.704514 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-01-05 01:09:19.704519 | orchestrator | Monday 05 January 2026 01:06:35 +0000 (0:00:06.037) 0:00:23.926 ******** 2026-01-05 01:09:19.704523 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 01:09:19.704528 | orchestrator | 2026-01-05 01:09:19.704532 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-01-05 01:09:19.704540 | orchestrator | Monday 05 January 2026 01:06:36 +0000 (0:00:01.134) 0:00:25.061 ******** 2026-01-05 01:09:19.704558 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1314110, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.045895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.704577 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1314110, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.045895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.704582 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1314110, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.045895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.704592 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1314110, 'dev': 110, 'nlink': 
1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.045895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.704597 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1314124, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0518951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.704603 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1314124, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0518951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.704610 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1314110, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.045895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 01:09:19.704615 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1314110, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.045895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.704619 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1314124, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0518951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.704623 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1314110, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.045895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2026-01-05 01:09:19.704631 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1314106, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.042895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.704635 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1314124, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0518951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.704639 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1314124, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0518951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.704646 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1314106, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.042895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.704650 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1314106, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.042895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.704654 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1314117, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0498953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.704658 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1314124, 'dev': 
110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0518951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 01:09:19.704665 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1314117, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0498953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 01:09:19.704670 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'size': 55956, ...})
2026-01-05 01:09:19.704690 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'size': 3900, ...})
2026-01-05 01:09:19.704698 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'size': 55956, ...})
2026-01-05 01:09:19.704702 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'size': 12980, ...})
2026-01-05 01:09:19.704706 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'size': 12293, ...})
2026-01-05 01:09:19.704710 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'size': 7933, ...})
2026-01-05 01:09:19.704718 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'size': 55956, ...})
2026-01-05 01:09:19.704724 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'size': 12293, ...})
2026-01-05 01:09:19.704731 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'size': 3900, ...})
2026-01-05 01:09:19.704739 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'size': 3900, ...})
2026-01-05 01:09:19.705123 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'size': 12293, ...})
2026-01-05 01:09:19.705146 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'size': 13522, ...})
2026-01-05 01:09:19.705172 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'size': 7933, ...})
2026-01-05 01:09:19.705180 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'size': 3900, ...})
2026-01-05 01:09:19.705186 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'size': 55956, ...})
2026-01-05 01:09:19.705193 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'size': 7933, ...})
2026-01-05 01:09:19.705200 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'size': 5593, ...})
2026-01-05 01:09:19.705216 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'size': 3900, ...})
2026-01-05 01:09:19.705226 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'size': 13522, ...})
2026-01-05 01:09:19.705239 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'size': 12293, ...})
2026-01-05 01:09:19.705246 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'size': 7933, ...})
2026-01-05 01:09:19.705253 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'size': 5987, ...})
2026-01-05 01:09:19.705259 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'size': 7933, ...})
2026-01-05 01:09:19.705266 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'size': 13522, ...})
2026-01-05 01:09:19.705277 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'size': 3, ...})
2026-01-05 01:09:19.705284 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'size': 5593, ...})
2026-01-05 01:09:19.705295 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'size': 3900, ...})
2026-01-05 01:09:19.705302 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'size': 3, ...})
2026-01-05 01:09:19.705309 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'size': 5987, ...})
2026-01-05 01:09:19.705316 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'size': 13522, ...})
2026-01-05 01:09:19.705323 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'size': 5593, ...})
2026-01-05 01:09:19.705333 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'size': 334, ...})
2026-01-05 01:09:19.705341 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'size': 3, ...})
2026-01-05 01:09:19.705422 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'size': 7408, ...})
2026-01-05 01:09:19.705432 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'size': 13522, ...})
2026-01-05 01:09:19.705439 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'size': 5987, ...})
2026-01-05 01:09:19.705446 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'size': 3, ...})
2026-01-05 01:09:19.705452 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'size': 7933, ...})
2026-01-05 01:09:19.705464 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'size': 5593, ...})
2026-01-05 01:09:19.705478 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'size': 334, ...})
2026-01-05 01:09:19.705485 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'size': 5593, ...})
2026-01-05 01:09:19.705492 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'size': 3, ...})
2026-01-05 01:09:19.705498 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'size': 3, ...})
2026-01-05 01:09:19.705505 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'size': 5987, ...})
2026-01-05 01:09:19.705512 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'size': 5051, ...})
2026-01-05 01:09:19.705524 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'size': 13522, ...})
2026-01-05 01:09:19.705537 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'size': 5987, ...})
2026-01-05 01:09:19.705544 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'size': 7408, ...})
2026-01-05 01:09:19.705551 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'size': 3, ...})
2026-01-05 01:09:19.705557 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'size': 3, ...})
2026-01-05 01:09:19.705565 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'size': 5593, ...})
2026-01-05 01:09:19.705572 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'size': 3, ...})
2026-01-05 01:09:19.705584 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'size': 12293, ...})
2026-01-05 01:09:19.705594 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'size': 334, ...})
2026-01-05 01:09:19.705598 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'size': 2309, ...})
2026-01-05 01:09:19.705602 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'size': 3, ...})
2026-01-05 01:09:19.705605 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'size': 3, ...})
2026-01-05 01:09:19.705610 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'size': 7408, ...})
2026-01-05 01:09:19.705614 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'size': 3, ...})
2026-01-05 01:09:19.705628 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'size': 334, ...})
2026-01-05 01:09:19.705632 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'size': 5051, ...})
2026-01-05 01:09:19.705636 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'size': 7408, ...})
2026-01-05 01:09:19.705640 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'size': 3, ...})
2026-01-05 01:09:19.705644 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1314109, 'dev': 110,
'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.042895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.705648 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1314104, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0408952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.705653 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1314114, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0488951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.705669 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1314115, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0495324, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.705674 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1314122, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0512931, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.705678 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1314134, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0538952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.705683 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1314100, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.039895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.705688 | 
orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1314099, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0394075, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.705693 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1314100, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.039895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.705697 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1314132, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0528953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.705706 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:09:19.705714 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1314102, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0408952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 01:09:19.705719 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1314114, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0488951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.705724 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1314121, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0508952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.705728 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1314134, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0538952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.705733 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1314115, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0495324, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.705738 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1314104, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0408952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.705762 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1314132, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 
1767572398.0528953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.705767 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:09:19.705774 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1314115, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0495324, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.705779 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1314121, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0508952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.705783 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1314100, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.039895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.705788 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1314114, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0488951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.705795 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1314114, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0488951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.705803 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1314104, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0408952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 
01:09:19.705817 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1314115, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0495324, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.705832 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1314132, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0528953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.705840 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:09:19.705846 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1314132, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0528953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.705853 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:09:19.705860 | 
orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1314100, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.039895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.705866 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1314114, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0488951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.705873 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1314112, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0469089, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 01:09:19.705886 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1314132, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0528953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.705893 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:09:19.705900 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1314115, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0495324, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.705910 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1314114, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0488951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.705918 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 3539, 'inode': 1314132, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0528953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 01:09:19.705925 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:09:19.705932 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1314116, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0498953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 01:09:19.705940 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1314113, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0473282, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 01:09:19.705947 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1314109, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 
1767571365.0, 'ctime': 1767572398.042895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 01:09:19.705959 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1314122, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0512931, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 01:09:19.705966 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1314099, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0394075, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 01:09:19.705976 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1314134, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0538952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 01:09:19.705983 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1314121, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0508952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 01:09:19.705990 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1314104, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0408952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 01:09:19.705998 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1314100, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.039895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 01:09:19.706005 | orchestrator | changed: 
[testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1314115, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0495324, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 01:09:19.706073 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1314114, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0488951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 01:09:19.706085 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1314132, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0528953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 01:09:19.706092 | orchestrator |
2026-01-05 01:09:19.706099 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-01-05 01:09:19.706106 | orchestrator | Monday 05 January 2026 01:07:01 +0000 (0:00:25.106) 0:00:50.167 ********
2026-01-05 01:09:19.706112 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-05 01:09:19.706119 | orchestrator |
2026-01-05 01:09:19.706130 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-01-05 01:09:19.706138 | orchestrator | Monday 05 January 2026 01:07:02 +0000 (0:00:00.766) 0:00:50.933 ********
2026-01-05 01:09:19.706144 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2026-01-05 01:09:19.706176 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-05 01:09:19.706182 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2026-01-05 01:09:19.706213 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-05 01:09:19.706220 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2026-01-05 01:09:19.706250 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2026-01-05 01:09:19.706288 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2026-01-05 01:09:19.706319 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2026-01-05 01:09:19.706350 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2026-01-05 01:09:19.706380 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-05 01:09:19.706386 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-05 01:09:19.706393 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-05 01:09:19.706399 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-05 01:09:19.706430 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-05 01:09:19.706437 | orchestrator |
2026-01-05 01:09:19.706443 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-01-05 01:09:19.706449 | orchestrator | Monday 05 January 2026 01:07:04 +0000 (0:00:01.813) 0:00:52.747 ********
2026-01-05 01:09:19.706456 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-05 01:09:19.706462 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:19.706469 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-05 01:09:19.706475 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:19.706482 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-05 01:09:19.706488 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:19.706495 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-05 01:09:19.706501 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:09:19.706507 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-05 01:09:19.706514 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:09:19.706520 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-05 01:09:19.706526 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:09:19.706532 | orchestrator | changed: [testbed-manager]
=> (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-01-05 01:09:19.706539 | orchestrator | 2026-01-05 01:09:19.706545 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-01-05 01:09:19.706551 | orchestrator | Monday 05 January 2026 01:07:19 +0000 (0:00:15.869) 0:01:08.616 ******** 2026-01-05 01:09:19.706557 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-05 01:09:19.706578 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-05 01:09:19.706585 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:09:19.706598 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:09:19.706604 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-05 01:09:19.706610 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:09:19.706617 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-05 01:09:19.706623 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:09:19.706629 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-05 01:09:19.706635 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:09:19.706642 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-05 01:09:19.706648 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:09:19.706654 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-01-05 01:09:19.706661 | orchestrator | 2026-01-05 01:09:19.706667 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-01-05 01:09:19.706673 | orchestrator | Monday 05 January 2026 01:07:23 
+0000 (0:00:03.085) 0:01:11.702 ******** 2026-01-05 01:09:19.706679 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-05 01:09:19.706687 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:09:19.706693 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-05 01:09:19.706699 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:09:19.706705 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-05 01:09:19.706711 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:09:19.706718 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-05 01:09:19.706724 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:09:19.706730 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-05 01:09:19.706736 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:09:19.706743 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-05 01:09:19.706750 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:09:19.706756 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-01-05 01:09:19.706762 | orchestrator | 2026-01-05 01:09:19.706768 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-01-05 01:09:19.706774 | orchestrator | Monday 05 January 2026 01:07:24 +0000 (0:00:01.694) 0:01:13.396 ******** 2026-01-05 01:09:19.706779 | 
orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 01:09:19.706785 | orchestrator | 2026-01-05 01:09:19.706791 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-01-05 01:09:19.706797 | orchestrator | Monday 05 January 2026 01:07:25 +0000 (0:00:00.883) 0:01:14.280 ******** 2026-01-05 01:09:19.706804 | orchestrator | skipping: [testbed-manager] 2026-01-05 01:09:19.706810 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:09:19.706816 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:09:19.706823 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:09:19.706829 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:09:19.706836 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:09:19.706842 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:09:19.706848 | orchestrator | 2026-01-05 01:09:19.706854 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-01-05 01:09:19.706861 | orchestrator | Monday 05 January 2026 01:07:26 +0000 (0:00:00.753) 0:01:15.033 ******** 2026-01-05 01:09:19.706872 | orchestrator | skipping: [testbed-manager] 2026-01-05 01:09:19.706878 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:09:19.706885 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:09:19.706891 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:09:19.706897 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:09:19.706903 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:09:19.706909 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:09:19.706915 | orchestrator | 2026-01-05 01:09:19.706922 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-01-05 01:09:19.706928 | orchestrator | Monday 05 January 2026 01:07:28 +0000 (0:00:02.482) 0:01:17.516 ******** 2026-01-05 01:09:19.706934 | orchestrator | skipping: [testbed-manager] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-05 01:09:19.706940 | orchestrator | skipping: [testbed-manager] 2026-01-05 01:09:19.706944 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-05 01:09:19.706948 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-05 01:09:19.706952 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:09:19.706959 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:09:19.706965 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-05 01:09:19.706975 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:09:19.706989 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-05 01:09:19.706994 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:09:19.707000 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-05 01:09:19.707006 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:09:19.707012 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-05 01:09:19.707017 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:09:19.707022 | orchestrator | 2026-01-05 01:09:19.707028 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-01-05 01:09:19.707072 | orchestrator | Monday 05 January 2026 01:07:31 +0000 (0:00:02.183) 0:01:19.699 ******** 2026-01-05 01:09:19.707080 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-05 01:09:19.707086 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:09:19.707093 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-05 01:09:19.707099 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:09:19.707105 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-05 01:09:19.707111 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:09:19.707117 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-05 01:09:19.707123 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:09:19.707130 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-05 01:09:19.707137 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-05 01:09:19.707144 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:09:19.707150 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:09:19.707156 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-01-05 01:09:19.707162 | orchestrator | 2026-01-05 01:09:19.707168 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-01-05 01:09:19.707175 | orchestrator | Monday 05 January 2026 01:07:32 +0000 (0:00:01.522) 0:01:21.222 ******** 2026-01-05 01:09:19.707185 | orchestrator | [WARNING]: Skipped 2026-01-05 01:09:19.707189 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-01-05 01:09:19.707193 | orchestrator | due to this access issue: 2026-01-05 01:09:19.707197 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-01-05 01:09:19.707201 | orchestrator | not a directory 2026-01-05 01:09:19.707205 | orchestrator | ok: [testbed-manager -> 
localhost] 2026-01-05 01:09:19.707209 | orchestrator | 2026-01-05 01:09:19.707213 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-01-05 01:09:19.707217 | orchestrator | Monday 05 January 2026 01:07:33 +0000 (0:00:01.445) 0:01:22.668 ******** 2026-01-05 01:09:19.707221 | orchestrator | skipping: [testbed-manager] 2026-01-05 01:09:19.707225 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:09:19.707229 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:09:19.707232 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:09:19.707236 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:09:19.707240 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:09:19.707244 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:09:19.707247 | orchestrator | 2026-01-05 01:09:19.707251 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-01-05 01:09:19.707255 | orchestrator | Monday 05 January 2026 01:07:34 +0000 (0:00:00.988) 0:01:23.656 ******** 2026-01-05 01:09:19.707259 | orchestrator | skipping: [testbed-manager] 2026-01-05 01:09:19.707263 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:09:19.707267 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:09:19.707270 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:09:19.707274 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:09:19.707278 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:09:19.707282 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:09:19.707286 | orchestrator | 2026-01-05 01:09:19.707289 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-01-05 01:09:19.707293 | orchestrator | Monday 05 January 2026 01:07:35 +0000 (0:00:00.992) 0:01:24.649 ******** 2026-01-05 01:09:19.707298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:09:19.707303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:09:19.707312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:09:19.707317 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-05 01:09:19.707326 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:09:19.707330 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:09:19.707334 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:09:19.707338 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.707342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.707350 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 01:09:19.707354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-01-05 01:09:19.707363 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.707368 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.707372 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.707377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 
'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.707384 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.707392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.707406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-05 01:09:19.707413 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.707423 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.707430 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.707436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.707444 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-05 01:09:19.707452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.707462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 01:09:19.707475 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.707479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.707483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-05 01:09:19.707487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:19.707491 | orchestrator | 2026-01-05 01:09:19.707495 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-01-05 01:09:19.707499 | orchestrator | Monday 05 January 2026 01:07:40 +0000 (0:00:04.243) 0:01:28.893 ******** 2026-01-05 01:09:19.707503 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-05 01:09:19.707507 | orchestrator | skipping: [testbed-manager] 2026-01-05 01:09:19.707511 | orchestrator | 2026-01-05 01:09:19.707515 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-05 01:09:19.707518 | orchestrator | Monday 05 January 2026 01:07:41 +0000 (0:00:01.108) 0:01:30.001 ******** 2026-01-05 01:09:19.707522 | orchestrator | 2026-01-05 01:09:19.707526 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-05 01:09:19.707530 | orchestrator | Monday 05 January 2026 01:07:41 +0000 (0:00:00.063) 0:01:30.065 ******** 2026-01-05 01:09:19.707533 | orchestrator | 2026-01-05 01:09:19.707537 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-05 01:09:19.707541 | orchestrator | Monday 05 January 2026 01:07:41 +0000 (0:00:00.062) 0:01:30.127 ******** 2026-01-05 01:09:19.707545 | orchestrator | 2026-01-05 01:09:19.707549 | orchestrator | TASK [prometheus : Flush handlers] 
********************************************* 2026-01-05 01:09:19.707553 | orchestrator | Monday 05 January 2026 01:07:41 +0000 (0:00:00.058) 0:01:30.185 ******** 2026-01-05 01:09:19.707557 | orchestrator | 2026-01-05 01:09:19.707561 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-05 01:09:19.707565 | orchestrator | Monday 05 January 2026 01:07:41 +0000 (0:00:00.176) 0:01:30.362 ******** 2026-01-05 01:09:19.707568 | orchestrator | 2026-01-05 01:09:19.707572 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-05 01:09:19.707584 | orchestrator | Monday 05 January 2026 01:07:41 +0000 (0:00:00.060) 0:01:30.423 ******** 2026-01-05 01:09:19.707588 | orchestrator | 2026-01-05 01:09:19.707592 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-05 01:09:19.707595 | orchestrator | Monday 05 January 2026 01:07:41 +0000 (0:00:00.063) 0:01:30.486 ******** 2026-01-05 01:09:19.707599 | orchestrator | 2026-01-05 01:09:19.707603 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-01-05 01:09:19.707607 | orchestrator | Monday 05 January 2026 01:07:41 +0000 (0:00:00.081) 0:01:30.567 ******** 2026-01-05 01:09:19.707610 | orchestrator | changed: [testbed-manager] 2026-01-05 01:09:19.707614 | orchestrator | 2026-01-05 01:09:19.707618 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-01-05 01:09:19.707624 | orchestrator | Monday 05 January 2026 01:08:05 +0000 (0:00:23.210) 0:01:53.778 ******** 2026-01-05 01:09:19.707628 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:09:19.707632 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:09:19.707636 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:09:19.707639 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:09:19.707643 | orchestrator | changed: 
[testbed-node-4] 2026-01-05 01:09:19.707647 | orchestrator | changed: [testbed-manager] 2026-01-05 01:09:19.707651 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:09:19.707654 | orchestrator | 2026-01-05 01:09:19.707658 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-01-05 01:09:19.707662 | orchestrator | Monday 05 January 2026 01:08:15 +0000 (0:00:09.995) 0:02:03.773 ******** 2026-01-05 01:09:19.707666 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:09:19.707669 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:09:19.707673 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:09:19.707677 | orchestrator | 2026-01-05 01:09:19.707681 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-01-05 01:09:19.707684 | orchestrator | Monday 05 January 2026 01:08:20 +0000 (0:00:05.391) 0:02:09.165 ******** 2026-01-05 01:09:19.707688 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:09:19.707692 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:09:19.707696 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:09:19.707699 | orchestrator | 2026-01-05 01:09:19.707703 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-01-05 01:09:19.707707 | orchestrator | Monday 05 January 2026 01:08:25 +0000 (0:00:05.154) 0:02:14.320 ******** 2026-01-05 01:09:19.707711 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:09:19.707714 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:09:19.707718 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:09:19.707722 | orchestrator | changed: [testbed-manager] 2026-01-05 01:09:19.707726 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:09:19.707730 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:09:19.707733 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:09:19.707737 | orchestrator | 2026-01-05 01:09:19.707741 
| orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-01-05 01:09:19.707745 | orchestrator | Monday 05 January 2026 01:08:39 +0000 (0:00:13.893) 0:02:28.214 ******** 2026-01-05 01:09:19.707749 | orchestrator | changed: [testbed-manager] 2026-01-05 01:09:19.707753 | orchestrator | 2026-01-05 01:09:19.707756 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-01-05 01:09:19.707760 | orchestrator | Monday 05 January 2026 01:08:48 +0000 (0:00:08.983) 0:02:37.199 ******** 2026-01-05 01:09:19.707764 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:09:19.707768 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:09:19.707771 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:09:19.707775 | orchestrator | 2026-01-05 01:09:19.707779 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-01-05 01:09:19.707783 | orchestrator | Monday 05 January 2026 01:08:59 +0000 (0:00:10.524) 0:02:47.723 ******** 2026-01-05 01:09:19.707786 | orchestrator | changed: [testbed-manager] 2026-01-05 01:09:19.707795 | orchestrator | 2026-01-05 01:09:19.707798 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-01-05 01:09:19.707802 | orchestrator | Monday 05 January 2026 01:09:05 +0000 (0:00:06.944) 0:02:54.668 ******** 2026-01-05 01:09:19.707806 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:09:19.707810 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:09:19.707814 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:09:19.707817 | orchestrator | 2026-01-05 01:09:19.707821 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:09:19.707825 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-05 01:09:19.707830 | orchestrator | 
testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-05 01:09:19.707834 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-05 01:09:19.707838 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-05 01:09:19.707842 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-05 01:09:19.707846 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-05 01:09:19.707850 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-05 01:09:19.707854 | orchestrator | 2026-01-05 01:09:19.707858 | orchestrator | 2026-01-05 01:09:19.707862 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:09:19.707866 | orchestrator | Monday 05 January 2026 01:09:16 +0000 (0:00:10.388) 0:03:05.057 ******** 2026-01-05 01:09:19.707870 | orchestrator | =============================================================================== 2026-01-05 01:09:19.707874 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 25.11s 2026-01-05 01:09:19.707877 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 23.21s 2026-01-05 01:09:19.707881 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 15.87s 2026-01-05 01:09:19.707885 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.89s 2026-01-05 01:09:19.707890 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.52s 2026-01-05 01:09:19.707898 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.39s 2026-01-05 01:09:19.707905 | 
orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 10.00s 2026-01-05 01:09:19.707910 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.99s 2026-01-05 01:09:19.707914 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 6.94s 2026-01-05 01:09:19.707918 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.04s 2026-01-05 01:09:19.707922 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.00s 2026-01-05 01:09:19.707926 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.39s 2026-01-05 01:09:19.707930 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 5.16s 2026-01-05 01:09:19.707934 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.24s 2026-01-05 01:09:19.707937 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.09s 2026-01-05 01:09:19.707941 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.08s 2026-01-05 01:09:19.707945 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.48s 2026-01-05 01:09:19.707952 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.18s 2026-01-05 01:09:19.707955 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.07s 2026-01-05 01:09:19.707960 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.81s 2026-01-05 01:09:19.707963 | orchestrator | 2026-01-05 01:09:19 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:09:19.708030 | orchestrator | 2026-01-05 01:09:19 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
01:09:19.708057 | orchestrator | 2026-01-05 01:09:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:09:22.780542 | orchestrator | 2026-01-05 01:09:22 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:09:22.781683 | orchestrator | 2026-01-05 01:09:22 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:09:22.783410 | orchestrator | 2026-01-05 01:09:22 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:09:22.784549 | orchestrator | 2026-01-05 01:09:22 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:09:22.784610 | orchestrator | 2026-01-05 01:09:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:09:25.839493 | orchestrator | 2026-01-05 01:09:25 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:09:25.840380 | orchestrator | 2026-01-05 01:09:25 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:09:25.842390 | orchestrator | 2026-01-05 01:09:25 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:09:25.843570 | orchestrator | 2026-01-05 01:09:25 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:09:25.843661 | orchestrator | 2026-01-05 01:09:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:09:28.887581 | orchestrator | 2026-01-05 01:09:28 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:09:28.888313 | orchestrator | 2026-01-05 01:09:28 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:09:28.890309 | orchestrator | 2026-01-05 01:09:28 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:09:28.891869 | orchestrator | 2026-01-05 01:09:28 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:09:28.892172 | orchestrator 
| 2026-01-05 01:09:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:09:31.946842 | orchestrator | 2026-01-05 01:09:31 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:09:31.948596 | orchestrator | 2026-01-05 01:09:31 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:09:31.950267 | orchestrator | 2026-01-05 01:09:31 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:09:31.951925 | orchestrator | 2026-01-05 01:09:31 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:09:31.951980 | orchestrator | 2026-01-05 01:09:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:09:35.015061 | orchestrator | 2026-01-05 01:09:35 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:09:35.016752 | orchestrator | 2026-01-05 01:09:35 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:09:35.017830 | orchestrator | 2026-01-05 01:09:35 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:09:35.020980 | orchestrator | 2026-01-05 01:09:35 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:09:35.021056 | orchestrator | 2026-01-05 01:09:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:09:38.067074 | orchestrator | 2026-01-05 01:09:38 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:09:38.069212 | orchestrator | 2026-01-05 01:09:38 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:09:38.070298 | orchestrator | 2026-01-05 01:09:38 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:09:38.071324 | orchestrator | 2026-01-05 01:09:38 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:09:38.071362 | orchestrator | 2026-01-05 01:09:38 | INFO  | 
Wait 1 second(s) until the next check 2026-01-05 01:09:41.112175 | orchestrator | 2026-01-05 01:09:41 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:09:41.112670 | orchestrator | 2026-01-05 01:09:41 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:09:41.114837 | orchestrator | 2026-01-05 01:09:41 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:09:41.114889 | orchestrator | 2026-01-05 01:09:41 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:09:41.114903 | orchestrator | 2026-01-05 01:09:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:09:44.162535 | orchestrator | 2026-01-05 01:09:44 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:09:44.166405 | orchestrator | 2026-01-05 01:09:44 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:09:44.168947 | orchestrator | 2026-01-05 01:09:44 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:09:44.171248 | orchestrator | 2026-01-05 01:09:44 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:09:44.171543 | orchestrator | 2026-01-05 01:09:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:09:47.231140 | orchestrator | 2026-01-05 01:09:47 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:09:47.232554 | orchestrator | 2026-01-05 01:09:47 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:09:47.234939 | orchestrator | 2026-01-05 01:09:47 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:09:47.237329 | orchestrator | 2026-01-05 01:09:47 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:09:47.237374 | orchestrator | 2026-01-05 01:09:47 | INFO  | Wait 1 second(s) until the next 
check 2026-01-05 01:09:50.288563 | orchestrator | 2026-01-05 01:09:50 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:09:50.289796 | orchestrator | 2026-01-05 01:09:50 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:09:50.291648 | orchestrator | 2026-01-05 01:09:50 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:09:50.293281 | orchestrator | 2026-01-05 01:09:50 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:09:50.293326 | orchestrator | 2026-01-05 01:09:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:09:53.350253 | orchestrator | 2026-01-05 01:09:53 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:09:53.352041 | orchestrator | 2026-01-05 01:09:53 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:09:53.353638 | orchestrator | 2026-01-05 01:09:53 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:09:53.355379 | orchestrator | 2026-01-05 01:09:53 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:09:53.355412 | orchestrator | 2026-01-05 01:09:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:09:56.404455 | orchestrator | 2026-01-05 01:09:56 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:09:56.405323 | orchestrator | 2026-01-05 01:09:56 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:09:56.405594 | orchestrator | 2026-01-05 01:09:56 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:09:56.407532 | orchestrator | 2026-01-05 01:09:56 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:09:56.407617 | orchestrator | 2026-01-05 01:09:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 
01:09:59.457577 | orchestrator | 2026-01-05 01:09:59 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:09:59.460901 | orchestrator | 2026-01-05 01:09:59 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:09:59.463226 | orchestrator | 2026-01-05 01:09:59 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:09:59.465623 | orchestrator | 2026-01-05 01:09:59 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:09:59.465686 | orchestrator | 2026-01-05 01:09:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:02.513741 | orchestrator | 2026-01-05 01:10:02 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:10:02.514236 | orchestrator | 2026-01-05 01:10:02 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:10:02.516818 | orchestrator | 2026-01-05 01:10:02 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:10:02.518967 | orchestrator | 2026-01-05 01:10:02 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:10:02.519090 | orchestrator | 2026-01-05 01:10:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:05.568975 | orchestrator | 2026-01-05 01:10:05 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:10:05.570859 | orchestrator | 2026-01-05 01:10:05 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:10:05.575037 | orchestrator | 2026-01-05 01:10:05 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:10:05.580864 | orchestrator | 2026-01-05 01:10:05 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:10:05.580950 | orchestrator | 2026-01-05 01:10:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:08.628858 | orchestrator 
| 2026-01-05 01:10:08 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:10:08.629714 | orchestrator | 2026-01-05 01:10:08 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:10:08.630944 | orchestrator | 2026-01-05 01:10:08 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:10:08.632902 | orchestrator | 2026-01-05 01:10:08 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:10:08.632967 | orchestrator | 2026-01-05 01:10:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:11.689486 | orchestrator | 2026-01-05 01:10:11 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:10:11.692310 | orchestrator | 2026-01-05 01:10:11 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:10:11.695129 | orchestrator | 2026-01-05 01:10:11 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:10:11.698175 | orchestrator | 2026-01-05 01:10:11 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:10:11.698259 | orchestrator | 2026-01-05 01:10:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:14.742741 | orchestrator | 2026-01-05 01:10:14 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:10:14.743837 | orchestrator | 2026-01-05 01:10:14 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:10:14.745515 | orchestrator | 2026-01-05 01:10:14 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:10:14.747537 | orchestrator | 2026-01-05 01:10:14 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:10:14.747654 | orchestrator | 2026-01-05 01:10:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:17.790730 | orchestrator | 2026-01-05 01:10:17 | INFO  | 
Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:10:17.790815 | orchestrator | 2026-01-05 01:10:17 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:10:17.792029 | orchestrator | 2026-01-05 01:10:17 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:10:17.793357 | orchestrator | 2026-01-05 01:10:17 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:10:17.795466 | orchestrator | 2026-01-05 01:10:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:20.839854 | orchestrator | 2026-01-05 01:10:20 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:10:20.842508 | orchestrator | 2026-01-05 01:10:20 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:10:20.844721 | orchestrator | 2026-01-05 01:10:20 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:10:20.846368 | orchestrator | 2026-01-05 01:10:20 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:10:20.846443 | orchestrator | 2026-01-05 01:10:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:23.900849 | orchestrator | 2026-01-05 01:10:23 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:10:23.902093 | orchestrator | 2026-01-05 01:10:23 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:10:23.905085 | orchestrator | 2026-01-05 01:10:23 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:10:23.907061 | orchestrator | 2026-01-05 01:10:23 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:10:23.907172 | orchestrator | 2026-01-05 01:10:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:26.951689 | orchestrator | 2026-01-05 01:10:26 | INFO  | Task 
e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:10:26.954462 | orchestrator | 2026-01-05 01:10:26 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:10:26.957378 | orchestrator | 2026-01-05 01:10:26 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:10:26.959510 | orchestrator | 2026-01-05 01:10:26 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:10:26.959687 | orchestrator | 2026-01-05 01:10:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:30.012465 | orchestrator | 2026-01-05 01:10:30 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:10:30.014295 | orchestrator | 2026-01-05 01:10:30 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:10:30.016717 | orchestrator | 2026-01-05 01:10:30 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:10:30.021742 | orchestrator | 2026-01-05 01:10:30 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:10:30.021818 | orchestrator | 2026-01-05 01:10:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:33.067888 | orchestrator | 2026-01-05 01:10:33 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:10:33.070136 | orchestrator | 2026-01-05 01:10:33 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:10:33.072905 | orchestrator | 2026-01-05 01:10:33 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:10:33.074525 | orchestrator | 2026-01-05 01:10:33 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:10:33.074650 | orchestrator | 2026-01-05 01:10:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:36.124974 | orchestrator | 2026-01-05 01:10:36 | INFO  | Task 
e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:10:36.127844 | orchestrator | 2026-01-05 01:10:36 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:10:36.129626 | orchestrator | 2026-01-05 01:10:36 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:10:36.131544 | orchestrator | 2026-01-05 01:10:36 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:10:36.131587 | orchestrator | 2026-01-05 01:10:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:39.186004 | orchestrator | 2026-01-05 01:10:39 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:10:39.187605 | orchestrator | 2026-01-05 01:10:39 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:10:39.189317 | orchestrator | 2026-01-05 01:10:39 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:10:39.191491 | orchestrator | 2026-01-05 01:10:39 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:10:39.191530 | orchestrator | 2026-01-05 01:10:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:42.245871 | orchestrator | 2026-01-05 01:10:42 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:10:42.248693 | orchestrator | 2026-01-05 01:10:42 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:10:42.252670 | orchestrator | 2026-01-05 01:10:42 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:10:42.255578 | orchestrator | 2026-01-05 01:10:42 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:10:42.255645 | orchestrator | 2026-01-05 01:10:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:45.301749 | orchestrator | 2026-01-05 01:10:45 | INFO  | Task 
e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:10:45.303504 | orchestrator | 2026-01-05 01:10:45 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:10:45.304721 | orchestrator | 2026-01-05 01:10:45 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:10:45.306433 | orchestrator | 2026-01-05 01:10:45 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:10:45.306474 | orchestrator | 2026-01-05 01:10:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:48.360280 | orchestrator | 2026-01-05 01:10:48 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:10:48.363500 | orchestrator | 2026-01-05 01:10:48 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:10:48.366257 | orchestrator | 2026-01-05 01:10:48 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:10:48.368306 | orchestrator | 2026-01-05 01:10:48 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:10:48.368386 | orchestrator | 2026-01-05 01:10:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:51.414674 | orchestrator | 2026-01-05 01:10:51 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:10:51.417578 | orchestrator | 2026-01-05 01:10:51 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:10:51.421278 | orchestrator | 2026-01-05 01:10:51 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:10:51.423883 | orchestrator | 2026-01-05 01:10:51 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:10:51.424341 | orchestrator | 2026-01-05 01:10:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:54.472680 | orchestrator | 2026-01-05 01:10:54 | INFO  | Task 
e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:10:54.474986 | orchestrator | 2026-01-05 01:10:54 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:10:54.477017 | orchestrator | 2026-01-05 01:10:54 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:10:54.479413 | orchestrator | 2026-01-05 01:10:54 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:10:54.479708 | orchestrator | 2026-01-05 01:10:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:10:57.522801 | orchestrator | 2026-01-05 01:10:57 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:10:57.525555 | orchestrator | 2026-01-05 01:10:57 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:10:57.527456 | orchestrator | 2026-01-05 01:10:57 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:10:57.531210 | orchestrator | 2026-01-05 01:10:57 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:10:57.531316 | orchestrator | 2026-01-05 01:10:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:11:00.573377 | orchestrator | 2026-01-05 01:11:00 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:11:00.574307 | orchestrator | 2026-01-05 01:11:00 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED 2026-01-05 01:11:00.575592 | orchestrator | 2026-01-05 01:11:00 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED 2026-01-05 01:11:00.577415 | orchestrator | 2026-01-05 01:11:00 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:11:00.577469 | orchestrator | 2026-01-05 01:11:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:11:03.624599 | orchestrator | 2026-01-05 01:11:03 | INFO  | Task 
e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 01:11:03.625672 | orchestrator | 2026-01-05 01:11:03 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED
2026-01-05 01:11:03.627570 | orchestrator | 2026-01-05 01:11:03 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED
2026-01-05 01:11:03.628680 | orchestrator | 2026-01-05 01:11:03 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 01:11:03.628711 | orchestrator | 2026-01-05 01:11:03 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:11:06.667738 | orchestrator | 2026-01-05 01:11:06 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 01:11:06.668430 | orchestrator | 2026-01-05 01:11:06 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED
2026-01-05 01:11:06.669422 | orchestrator | 2026-01-05 01:11:06 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED
2026-01-05 01:11:06.671580 | orchestrator | 2026-01-05 01:11:06 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 01:11:06.671645 | orchestrator | 2026-01-05 01:11:06 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:11:09.729784 | orchestrator | 2026-01-05 01:11:09 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 01:11:09.732405 | orchestrator | 2026-01-05 01:11:09 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED
2026-01-05 01:11:09.733353 | orchestrator | 2026-01-05 01:11:09 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED
2026-01-05 01:11:09.735327 | orchestrator | 2026-01-05 01:11:09 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 01:11:09.735386 | orchestrator | 2026-01-05 01:11:09 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:11:12.784482 | orchestrator | 2026-01-05 01:11:12 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 01:11:12.786821 | orchestrator | 2026-01-05 01:11:12 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED
2026-01-05 01:11:12.789770 | orchestrator | 2026-01-05 01:11:12 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED
2026-01-05 01:11:12.791260 | orchestrator | 2026-01-05 01:11:12 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 01:11:12.791294 | orchestrator | 2026-01-05 01:11:12 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:11:15.841869 | orchestrator | 2026-01-05 01:11:15 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 01:11:15.843293 | orchestrator | 2026-01-05 01:11:15 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED
2026-01-05 01:11:15.845187 | orchestrator | 2026-01-05 01:11:15 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED
2026-01-05 01:11:15.846806 | orchestrator | 2026-01-05 01:11:15 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 01:11:15.846862 | orchestrator | 2026-01-05 01:11:15 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:11:18.889996 | orchestrator | 2026-01-05 01:11:18 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 01:11:18.892548 | orchestrator | 2026-01-05 01:11:18 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED
2026-01-05 01:11:18.894794 | orchestrator | 2026-01-05 01:11:18 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state STARTED
2026-01-05 01:11:18.896780 | orchestrator | 2026-01-05 01:11:18 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 01:11:18.896823 | orchestrator | 2026-01-05 01:11:18 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:11:21.960431 | orchestrator | 2026-01-05 01:11:21 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 01:11:21.962674 | orchestrator | 2026-01-05 01:11:21 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED
2026-01-05 01:11:21.963047 | orchestrator | 2026-01-05 01:11:21 | INFO  | Task 0a6d3b01-35d0-43e1-8a59-f0abd3d6ceaa is in state SUCCESS
2026-01-05 01:11:21.964212 | orchestrator | 2026-01-05 01:11:21 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 01:11:21.964244 | orchestrator | 2026-01-05 01:11:21 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:11:25.029713 | orchestrator | 2026-01-05 01:11:25 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 01:11:25.031671 | orchestrator | 2026-01-05 01:11:25 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED
2026-01-05 01:11:25.035675 | orchestrator | 2026-01-05 01:11:25 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 01:11:25.035730 | orchestrator | 2026-01-05 01:11:25 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:11:28.077100 | orchestrator | 2026-01-05 01:11:28 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 01:11:28.077496 | orchestrator | 2026-01-05 01:11:28 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED
2026-01-05 01:11:28.080292 | orchestrator | 2026-01-05 01:11:28 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 01:11:28.080340 | orchestrator | 2026-01-05 01:11:28 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:11:31.127540 | orchestrator | 2026-01-05 01:11:31 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 01:11:31.129818 | orchestrator | 2026-01-05 01:11:31 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED
2026-01-05 01:11:31.133643 | orchestrator | 2026-01-05 01:11:31 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 01:11:31.133735 | orchestrator | 2026-01-05 01:11:31 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:11:34.183345 | orchestrator | 2026-01-05 01:11:34 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 01:11:34.184568 | orchestrator | 2026-01-05 01:11:34 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED
2026-01-05 01:11:34.186628 | orchestrator | 2026-01-05 01:11:34 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 01:11:34.186671 | orchestrator | 2026-01-05 01:11:34 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:11:37.240856 | orchestrator | 2026-01-05 01:11:37 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 01:11:37.244399 | orchestrator | 2026-01-05 01:11:37 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED
2026-01-05 01:11:37.246670 | orchestrator | 2026-01-05 01:11:37 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 01:11:37.246749 | orchestrator | 2026-01-05 01:11:37 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:11:40.298982 | orchestrator | 2026-01-05 01:11:40 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 01:11:40.300355 | orchestrator | 2026-01-05 01:11:40 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED
2026-01-05 01:11:40.301626 | orchestrator | 2026-01-05 01:11:40 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 01:11:40.301660 | orchestrator | 2026-01-05 01:11:40 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:11:43.348344 | orchestrator | 2026-01-05 01:11:43 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 01:11:43.350249 | orchestrator | 2026-01-05 01:11:43 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state 
STARTED
2026-01-05 01:11:43.351263 | orchestrator | 2026-01-05 01:11:43 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 01:11:43.351499 | orchestrator | 2026-01-05 01:11:43 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:11:46.395816 | orchestrator | 2026-01-05 01:11:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 01:11:46.400186 | orchestrator | 2026-01-05 01:11:46 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED
2026-01-05 01:11:46.402539 | orchestrator | 2026-01-05 01:11:46 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 01:11:46.402594 | orchestrator | 2026-01-05 01:11:46 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:11:49.447485 | orchestrator | 2026-01-05 01:11:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 01:11:49.450399 | orchestrator | 2026-01-05 01:11:49 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state STARTED
2026-01-05 01:11:49.453118 | orchestrator | 2026-01-05 01:11:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 01:11:49.453451 | orchestrator | 2026-01-05 01:11:49 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:11:52.522138 | orchestrator | 2026-01-05 01:11:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 01:11:52.526500 | orchestrator | 2026-01-05 01:11:52 | INFO  | Task cdccc919-c502-4034-b51e-a701064f73f1 is in state SUCCESS
2026-01-05 01:11:52.528724 | orchestrator | 
2026-01-05 01:11:52.528798 | orchestrator | 
2026-01-05 01:11:52.528812 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-01-05 01:11:52.528823 | orchestrator | 
2026-01-05 01:11:52.528833 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-01-05 01:11:52.528878 | orchestrator | Monday 05 January 2026 01:04:31 +0000 (0:00:00.097) 0:00:00.097 ********
2026-01-05 01:11:52.528897 | orchestrator | changed: [localhost]
2026-01-05 01:11:52.528908 | orchestrator | 
2026-01-05 01:11:52.528917 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-01-05 01:11:52.528926 | orchestrator | Monday 05 January 2026 01:04:32 +0000 (0:00:00.951) 0:00:01.049 ********
2026-01-05 01:11:52.528936 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left).
2026-01-05 01:11:52.528945 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (2 retries left).
2026-01-05 01:11:52.528954 | orchestrator | 
2026-01-05 01:11:52.528963 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-01-05 01:11:52.528972 | orchestrator | 
2026-01-05 01:11:52.528980 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-01-05 01:11:52.528989 | orchestrator | 
2026-01-05 01:11:52.528998 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-01-05 01:11:52.529007 | orchestrator | 
2026-01-05 01:11:52.529015 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-01-05 01:11:52.529024 | orchestrator | 
2026-01-05 01:11:52.529033 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-01-05 01:11:52.529043 | orchestrator | 
2026-01-05 01:11:52.529052 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-01-05 01:11:52.529090 | orchestrator | 
2026-01-05 01:11:52.529106 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-01-05 01:11:52.529119 | orchestrator | 
2026-01-05 01:11:52.529133 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-01-05 01:11:52.529222 | orchestrator | 
2026-01-05 01:11:52.529232 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-01-05 01:11:52.529384 | orchestrator | changed: [localhost]
2026-01-05 01:11:52.529396 | orchestrator | 
2026-01-05 01:11:52.529407 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-01-05 01:11:52.529417 | orchestrator | Monday 05 January 2026 01:11:06 +0000 (0:06:33.996) 0:06:35.045 ********
2026-01-05 01:11:52.529427 | orchestrator | changed: [localhost]
2026-01-05 01:11:52.529438 | orchestrator | 
2026-01-05 01:11:52.529448 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 01:11:52.529458 | orchestrator | 
2026-01-05 01:11:52.529468 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 01:11:52.529479 | orchestrator | Monday 05 January 2026 01:11:19 +0000 (0:00:12.726) 0:06:47.771 ********
2026-01-05 01:11:52.529489 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:11:52.529499 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:11:52.529510 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:11:52.529520 | orchestrator | 
2026-01-05 01:11:52.529529 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 01:11:52.529540 | orchestrator | Monday 05 January 2026 01:11:19 +0000 (0:00:00.370) 0:06:48.142 ********
2026-01-05 01:11:52.529550 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-01-05 01:11:52.529561 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-01-05 01:11:52.529571 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-01-05 01:11:52.529581 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2026-01-05 01:11:52.529591 | orchestrator | 
2026-01-05 01:11:52.529601 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-01-05 01:11:52.529612 | orchestrator | skipping: no hosts matched
2026-01-05 01:11:52.529623 | orchestrator | 
2026-01-05 01:11:52.529634 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 01:11:52.529644 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 01:11:52.529670 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 01:11:52.529681 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 01:11:52.529690 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 01:11:52.529699 | orchestrator | 
2026-01-05 01:11:52.529707 | orchestrator | 
2026-01-05 01:11:52.529716 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 01:11:52.529725 | orchestrator | Monday 05 January 2026 01:11:20 +0000 (0:00:00.788) 0:06:48.931 ********
2026-01-05 01:11:52.529734 | orchestrator | ===============================================================================
2026-01-05 01:11:52.529743 | orchestrator | Download ironic-agent initramfs --------------------------------------- 394.00s
2026-01-05 01:11:52.529752 | orchestrator | Download ironic-agent kernel ------------------------------------------- 12.73s
2026-01-05 01:11:52.529761 | orchestrator | Ensure the destination directory exists --------------------------------- 0.95s
2026-01-05 01:11:52.529770 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.79s
2026-01-05 01:11:52.529778 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s
2026-01-05 01:11:52.529787 | orchestrator | 
2026-01-05 01:11:52.529796 | orchestrator | 
2026-01-05 01:11:52.529866 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 01:11:52.529877 | orchestrator | 
2026-01-05 01:11:52.529886 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 01:11:52.529895 | orchestrator | Monday 05 January 2026 01:09:22 +0000 (0:00:00.313) 0:00:00.313 ********
2026-01-05 01:11:52.529904 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:11:52.529930 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:11:52.529939 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:11:52.529948 | orchestrator | 
2026-01-05 01:11:52.529957 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 01:11:52.529966 | orchestrator | Monday 05 January 2026 01:09:22 +0000 (0:00:00.330) 0:00:00.644 ********
2026-01-05 01:11:52.529975 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-01-05 01:11:52.529984 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-01-05 01:11:52.529992 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-01-05 01:11:52.530001 | orchestrator | 
2026-01-05 01:11:52.530010 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-01-05 01:11:52.530119 | orchestrator | 
2026-01-05 01:11:52.530176 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-01-05 01:11:52.530195 | orchestrator | Monday 05 January 2026 01:09:22 +0000 (0:00:00.510) 0:00:01.155 ********
2026-01-05 01:11:52.530209 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:11:52.530224 | orchestrator | 
2026-01-05 01:11:52.530238 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-01-05 01:11:52.530253 | orchestrator | Monday 05 January 2026 01:09:23 +0000 (0:00:00.569) 0:00:01.724 ********
2026-01-05 01:11:52.530273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-05 01:11:52.530297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-05 01:11:52.530398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-05 01:11:52.530411 | orchestrator | 
2026-01-05 01:11:52.530421 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2026-01-05 01:11:52.530440 | orchestrator | Monday 05 January 2026 01:09:24 +0000 (0:00:00.790) 0:00:02.515 ********
2026-01-05 01:11:52.530449 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2026-01-05 01:11:52.530459 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2026-01-05 01:11:52.530467 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-05 01:11:52.530476 | orchestrator | 
2026-01-05 01:11:52.530485 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-01-05 01:11:52.530494 | orchestrator | Monday 05 January 2026 01:09:25 +0000 (0:00:00.859) 0:00:03.375 ********
2026-01-05 01:11:52.530502 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:11:52.530512 | orchestrator | 
2026-01-05 01:11:52.530521 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2026-01-05 01:11:52.530529 | orchestrator | Monday 05 January 2026 01:09:25 +0000 (0:00:00.794) 0:00:04.170 ********
2026-01-05 01:11:52.530636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-05 01:11:52.530651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-05 01:11:52.530662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-05 01:11:52.530671 | orchestrator | 
2026-01-05 01:11:52.530680 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2026-01-05 01:11:52.530689 | orchestrator | Monday 05 January 2026 01:09:27 +0000 (0:00:01.455) 0:00:05.625 ********
2026-01-05 01:11:52.530698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-05 01:11:52.530707 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:11:52.530727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-05 01:11:52.530737 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:11:52.530746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-05 01:11:52.530755 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:11:52.530764 | orchestrator | 
2026-01-05 01:11:52.530773 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2026-01-05 01:11:52.530781 | orchestrator | Monday 05 January 2026 01:09:27 +0000 (0:00:00.384) 0:00:06.009 ********
2026-01-05 01:11:52.530809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-05 01:11:52.530819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-05 01:11:52.530828 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:11:52.530837 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:11:52.530908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-05 01:11:52.530917 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:11:52.530933 | orchestrator | 
2026-01-05 01:11:52.530943 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2026-01-05 01:11:52.530951 | orchestrator | Monday 05 January 2026 01:09:28 +0000 (0:00:00.893) 0:00:06.903 ********
2026-01-05 01:11:52.530965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-05 01:11:52.530975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-05 01:11:52.530993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-05 01:11:52.531002 | orchestrator | 
2026-01-05 01:11:52.531011 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2026-01-05 01:11:52.531020 | orchestrator | Monday 05 January 2026 01:09:29 +0000 (0:00:01.282) 0:00:08.186 ********
2026-01-05 01:11:52.531032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-05 01:11:52.531050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-05 01:11:52.531066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-05 01:11:52.531090 | orchestrator | 
2026-01-05 01:11:52.531106 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-01-05 01:11:52.531122 | orchestrator | Monday 05 January 2026 01:09:31 +0000 (0:00:01.366) 0:00:09.553 ********
2026-01-05 01:11:52.531138 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:11:52.531152 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:11:52.531168 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:11:52.531184 | orchestrator | 
2026-01-05 01:11:52.531199 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-01-05 01:11:52.531214 | orchestrator | Monday 05 January 2026 01:09:31 +0000 (0:00:00.618) 0:00:10.171 ********
2026-01-05 01:11:52.531230 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-01-05 01:11:52.531240 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-01-05 01:11:52.531249 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-01-05 01:11:52.531257 | orchestrator | 
2026-01-05 01:11:52.531266 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-01-05 01:11:52.531275 | orchestrator | Monday 05 January 2026 01:09:33 +0000 (0:00:01.357) 0:00:11.529 ********
2026-01-05 01:11:52.531284 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-01-05 01:11:52.531293 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-01-05 01:11:52.531302 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-01-05 01:11:52.531311 | orchestrator | 
2026-01-05 01:11:52.531320 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-01-05 01:11:52.531329 | orchestrator | Monday 05 January 2026 01:09:34 +0000 (0:00:01.324) 0:00:12.854 ********
2026-01-05 01:11:52.531337 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-05 01:11:52.531346 | orchestrator | 
2026-01-05 01:11:52.531355 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-01-05 01:11:52.531364 | orchestrator | Monday 05 January 2026 01:09:35 +0000 (0:00:00.998) 0:00:13.853 ********
2026-01-05 01:11:52.531372 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-01-05 01:11:52.531387 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-01-05 01:11:52.531397 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:11:52.531406 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:11:52.531414 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:11:52.531423 | orchestrator | 
2026-01-05 01:11:52.531432 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-01-05 01:11:52.531441 | orchestrator | Monday 05 January 2026 01:09:36 +0000 (0:00:00.727) 0:00:14.580 ********
2026-01-05 01:11:52.531450 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:11:52.531459 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:11:52.531467 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:11:52.531476 | orchestrator | 
2026-01-05 01:11:52.531485 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-01-05 01:11:52.531494 | orchestrator | Monday 05 January 2026 01:09:36 +0000 (0:00:00.576) 0:00:15.156 ********
2026-01-05 01:11:52.531509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1313929, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9419088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-05 01:11:52.531521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1313929, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9419088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-05 01:11:52.531530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1313929, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9419088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-05 01:11:52.531545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1313997, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9718943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-05 01:11:52.531555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1313997, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9718943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-05 01:11:52.531573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1313997, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9718943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-05 01:11:52.531582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1313965, 'dev': 110, 'nlink': 1, 'atime': 
1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.957894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.531621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1313965, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.957894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.531631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1313965, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.957894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.531651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1313999, 'dev': 110, 
'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9758945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.531661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1313999, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9758945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.531676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1313999, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9758945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.531686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 
0, 'size': 26655, 'inode': 1313980, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9622803, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.531702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1313980, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9622803, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.531711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1313980, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9622803, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.531720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1313992, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9668941, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.531734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1313992, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9668941, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.531743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1313992, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9668941, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.531758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1313927, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9396346, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.531773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1313927, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9396346, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.531783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1313927, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9396346, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.531792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1313949, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9456098, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.531806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1313949, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9456098, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.531815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1313949, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9456098, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.531830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1313970, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9588943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.531902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1313970, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9588943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.531913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1313970, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9588943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.531922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1313984, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9628942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.531936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1313984, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9628942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.531946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1313984, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9628942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.531955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1313996, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9688942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.531978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1313996, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9688942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.531988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1313996, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9688942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.531997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1313952, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.945894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1313952, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.945894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1313952, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.945894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1313989, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9665442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1313989, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9665442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1313982, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9622803, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532069 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1313989, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9665442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1313982, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9622803, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1313977, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9614472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532102 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1313982, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9622803, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1313977, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9614472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1313974, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9607086, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2026-01-05 01:11:52.532143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1313974, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9607086, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1313977, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9614472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1313988, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9648943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1313988, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9648943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1313974, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9607086, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1313972, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9598942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1313972, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9598942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1313988, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9648943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1313994, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9678943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1313994, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9678943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1313972, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9598942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1314087, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0335343, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1314087, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0335343, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1313994, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9678943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1314026, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 
1767572397.9928946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1314026, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9928946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1314087, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0335343, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1314009, 'dev': 110, 'nlink': 1, 'atime': 
1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9828944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1314009, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9828944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1314026, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9928946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 15725, 'inode': 1314038, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9988947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1314038, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9988947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1314009, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9828944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1314001, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9768944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1314001, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9768944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1314038, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9988947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1314062, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.018895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1314062, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.018895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1314001, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9768944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532523 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1314041, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.015895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1314041, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.015895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1314062, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.018895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1314063, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0199533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1314063, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0199533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1314041, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.015895, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1314081, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.030224, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1314081, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.030224, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1314063, 'dev': 110, 'nlink': 1, 'atime': 
1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0199533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1314061, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0178947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1314061, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0178947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 
1314081, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.030224, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1314034, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9968946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1314034, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9968946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1314061, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0178947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1314019, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9888945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1314019, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9888945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1314032, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9938946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.532989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1314032, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9938946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.533012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1314034, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9968946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.533027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1314012, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9858944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.533041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1314012, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9858944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.533056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1314019, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9888945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.533080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': 
{'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1314036, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9968946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.533102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1314036, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9968946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.533118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1314032, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9938946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.533142 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1314077, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0291386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.533158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1314077, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0291386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.533172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1314012, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9858944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.533203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1314071, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.026895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.533226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1314071, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.026895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.533239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1314036, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9968946, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.533254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1314003, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9798944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.533264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1314003, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9798944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.533274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1314077, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 
1767571365.0, 'ctime': 1767572398.0291386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.533313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1314006, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9808943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.533327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1314006, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9808943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.533337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 115472, 'inode': 1314071, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.026895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.533353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1314060, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0178947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.533363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1314060, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0178947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.533372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1314003, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9798944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.533401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1314069, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0228949, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.533415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1314069, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0228949, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.533425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1314006, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572397.9808943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.533439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1314060, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0178947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.533448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1314069, 'dev': 110, 'nlink': 1, 'atime': 1767571365.0, 'mtime': 1767571365.0, 'ctime': 1767572398.0228949, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 01:11:52.533458 | orchestrator | 
2026-01-05 01:11:52.533467 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-01-05 01:11:52.533476 | orchestrator | Monday 05 January 2026 01:10:17 +0000 (0:00:40.757) 0:00:55.914 ******** 2026-01-05 01:11:52.533491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 01:11:52.533507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 01:11:52.533528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 01:11:52.533544 | orchestrator | 2026-01-05 01:11:52.533561 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-01-05 01:11:52.533577 | orchestrator | Monday 05 January 2026 01:10:18 +0000 (0:00:01.067) 0:00:56.981 ******** 2026-01-05 01:11:52.533593 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:11:52.533610 | orchestrator | 2026-01-05 01:11:52.533626 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-01-05 01:11:52.533641 | orchestrator | Monday 05 January 2026 01:10:21 +0000 (0:00:02.551) 0:00:59.533 ******** 2026-01-05 01:11:52.533656 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:11:52.533665 | orchestrator | 2026-01-05 01:11:52.533674 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-05 01:11:52.533683 | orchestrator | Monday 05 January 2026 01:10:23 +0000 (0:00:02.651) 0:01:02.184 ******** 2026-01-05 01:11:52.533691 | orchestrator | 2026-01-05 01:11:52.533700 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-05 01:11:52.533709 | orchestrator | Monday 05 January 2026 01:10:23 +0000 (0:00:00.075) 0:01:02.260 ******** 2026-01-05 01:11:52.533718 | orchestrator | 2026-01-05 01:11:52.533727 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-05 01:11:52.533735 | orchestrator | Monday 05 January 2026 01:10:24 +0000 (0:00:00.073) 0:01:02.334 ******** 2026-01-05 01:11:52.533744 | orchestrator | 2026-01-05 
01:11:52.533753 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-01-05 01:11:52.533768 | orchestrator | Monday 05 January 2026 01:10:24 +0000 (0:00:00.331) 0:01:02.665 ******** 2026-01-05 01:11:52.533777 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:52.533786 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:52.533794 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:11:52.533803 | orchestrator | 2026-01-05 01:11:52.533812 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-01-05 01:11:52.533820 | orchestrator | Monday 05 January 2026 01:10:26 +0000 (0:00:01.845) 0:01:04.511 ******** 2026-01-05 01:11:52.533836 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:52.533869 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:52.533879 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-01-05 01:11:52.533888 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-01-05 01:11:52.533897 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2026-01-05 01:11:52.533906 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 
2026-01-05 01:11:52.533915 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:11:52.533924 | orchestrator | 2026-01-05 01:11:52.533933 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-01-05 01:11:52.533942 | orchestrator | Monday 05 January 2026 01:11:17 +0000 (0:00:51.704) 0:01:56.215 ******** 2026-01-05 01:11:52.533953 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:52.533969 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:11:52.533982 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:11:52.533995 | orchestrator | 2026-01-05 01:11:52.534008 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-01-05 01:11:52.534085 | orchestrator | Monday 05 January 2026 01:11:45 +0000 (0:00:27.810) 0:02:24.026 ******** 2026-01-05 01:11:52.534097 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:11:52.534106 | orchestrator | 2026-01-05 01:11:52.534115 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-01-05 01:11:52.534124 | orchestrator | Monday 05 January 2026 01:11:48 +0000 (0:00:02.481) 0:02:26.508 ******** 2026-01-05 01:11:52.534133 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:52.534142 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:11:52.534150 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:11:52.534159 | orchestrator | 2026-01-05 01:11:52.534168 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-01-05 01:11:52.534177 | orchestrator | Monday 05 January 2026 01:11:48 +0000 (0:00:00.620) 0:02:27.128 ******** 2026-01-05 01:11:52.534187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2026-01-05 01:11:52.534199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-01-05 01:11:52.534209 | orchestrator | 2026-01-05 01:11:52.534218 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-01-05 01:11:52.534226 | orchestrator | Monday 05 January 2026 01:11:51 +0000 (0:00:02.656) 0:02:29.784 ******** 2026-01-05 01:11:52.534235 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:11:52.534244 | orchestrator | 2026-01-05 01:11:52.534252 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:11:52.534268 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-05 01:11:52.534279 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-05 01:11:52.534288 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-05 01:11:52.534297 | orchestrator | 2026-01-05 01:11:52.534306 | orchestrator | 2026-01-05 01:11:52.534322 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:11:52.534331 | orchestrator | Monday 05 January 2026 01:11:51 +0000 (0:00:00.267) 0:02:30.052 ******** 2026-01-05 01:11:52.534340 | orchestrator | =============================================================================== 2026-01-05 01:11:52.534349 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 51.70s 2026-01-05 01:11:52.534357 | orchestrator | grafana : Copying over custom 
dashboards ------------------------------- 40.76s 2026-01-05 01:11:52.534366 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 27.81s 2026-01-05 01:11:52.534375 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.66s 2026-01-05 01:11:52.534472 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.65s 2026-01-05 01:11:52.534484 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.55s 2026-01-05 01:11:52.534493 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.48s 2026-01-05 01:11:52.534512 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.85s 2026-01-05 01:11:52.534521 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.46s 2026-01-05 01:11:52.534531 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.37s 2026-01-05 01:11:52.534540 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.36s 2026-01-05 01:11:52.534549 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.32s 2026-01-05 01:11:52.534558 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.28s 2026-01-05 01:11:52.534567 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.07s 2026-01-05 01:11:52.534576 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 1.00s 2026-01-05 01:11:52.534584 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.89s 2026-01-05 01:11:52.534593 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.86s 2026-01-05 01:11:52.534602 | orchestrator | grafana : include_tasks 
------------------------------------------------- 0.79s 2026-01-05 01:11:52.534610 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.79s 2026-01-05 01:11:52.534619 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.73s 2026-01-05 01:11:52.534629 | orchestrator | 2026-01-05 01:11:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:11:52.534638 | orchestrator | 2026-01-05 01:11:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:11:55.584122 | orchestrator | 2026-01-05 01:11:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:11:55.585702 | orchestrator | 2026-01-05 01:11:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:11:55.585736 | orchestrator | 2026-01-05 01:11:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:11:58.631700 | orchestrator | 2026-01-05 01:11:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:11:58.632419 | orchestrator | 2026-01-05 01:11:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:11:58.632478 | orchestrator | 2026-01-05 01:11:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:12:01.674783 | orchestrator | 2026-01-05 01:12:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:12:01.675693 | orchestrator | 2026-01-05 01:12:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:12:01.675736 | orchestrator | 2026-01-05 01:12:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:12:04.720012 | orchestrator | 2026-01-05 01:12:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:12:04.720783 | orchestrator | 2026-01-05 01:12:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
01:12:04.720803 | orchestrator | 2026-01-05 01:12:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:12:07.776492 | orchestrator | 2026-01-05 01:12:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:12:07.777721 | orchestrator | 2026-01-05 01:12:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:12:07.777796 | orchestrator | 2026-01-05 01:12:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:12:10.828811 | orchestrator | 2026-01-05 01:12:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:12:10.830705 | orchestrator | 2026-01-05 01:12:10 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:12:10.830764 | orchestrator | 2026-01-05 01:12:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:12:13.879943 | orchestrator | 2026-01-05 01:12:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:12:13.881850 | orchestrator | 2026-01-05 01:12:13 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:12:13.882114 | orchestrator | 2026-01-05 01:12:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:12:16.936678 | orchestrator | 2026-01-05 01:12:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:12:16.939548 | orchestrator | 2026-01-05 01:12:16 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:12:16.940504 | orchestrator | 2026-01-05 01:12:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:12:19.993213 | orchestrator | 2026-01-05 01:12:19 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:12:19.994304 | orchestrator | 2026-01-05 01:12:19 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:12:19.994393 | orchestrator | 2026-01-05 01:12:19 | INFO  | Wait 1 second(s) 
until the next check
2026-01-05 01:12:23.057681 | orchestrator | 2026-01-05 01:12:23 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 01:12:23.060682 | orchestrator | 2026-01-05 01:12:23 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 01:12:23.060755 | orchestrator | 2026-01-05 01:12:23 | INFO  | Wait 1 second(s) until the next check
[... identical polling cycle repeated every ~3 seconds from 01:12:26 to 01:17:34; both tasks remained in state STARTED throughout ...]
2026-01-05 01:17:37.465728 | orchestrator | 2026-01-05 01:17:37 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 01:17:37.468762 | orchestrator | 2026-01-05 01:17:37 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 01:17:37.468876 | orchestrator | 2026-01-05 01:17:37 | INFO  | Wait 1 second(s)
until the next check 2026-01-05 01:17:40.526555 | orchestrator | 2026-01-05 01:17:40 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:17:40.528325 | orchestrator | 2026-01-05 01:17:40 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:17:40.528378 | orchestrator | 2026-01-05 01:17:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:17:43.585951 | orchestrator | 2026-01-05 01:17:43 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:17:43.588884 | orchestrator | 2026-01-05 01:17:43 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:17:43.588965 | orchestrator | 2026-01-05 01:17:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:17:46.642314 | orchestrator | 2026-01-05 01:17:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:17:46.644278 | orchestrator | 2026-01-05 01:17:46 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:17:46.644331 | orchestrator | 2026-01-05 01:17:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:17:49.696502 | orchestrator | 2026-01-05 01:17:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:17:49.696683 | orchestrator | 2026-01-05 01:17:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:17:49.696699 | orchestrator | 2026-01-05 01:17:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:17:52.746001 | orchestrator | 2026-01-05 01:17:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:17:52.747765 | orchestrator | 2026-01-05 01:17:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:17:52.747849 | orchestrator | 2026-01-05 01:17:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:17:55.798288 | orchestrator | 2026-01-05 
01:17:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:17:55.799639 | orchestrator | 2026-01-05 01:17:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:17:55.799701 | orchestrator | 2026-01-05 01:17:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:17:58.861202 | orchestrator | 2026-01-05 01:17:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:17:58.862404 | orchestrator | 2026-01-05 01:17:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:17:58.862699 | orchestrator | 2026-01-05 01:17:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:18:01.919580 | orchestrator | 2026-01-05 01:18:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:18:01.921747 | orchestrator | 2026-01-05 01:18:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:18:01.921810 | orchestrator | 2026-01-05 01:18:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:18:04.973357 | orchestrator | 2026-01-05 01:18:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:18:04.975100 | orchestrator | 2026-01-05 01:18:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:18:04.975165 | orchestrator | 2026-01-05 01:18:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:18:08.026875 | orchestrator | 2026-01-05 01:18:08 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:18:08.028336 | orchestrator | 2026-01-05 01:18:08 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:18:08.028416 | orchestrator | 2026-01-05 01:18:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:18:11.084522 | orchestrator | 2026-01-05 01:18:11 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 01:18:11.085793 | orchestrator | 2026-01-05 01:18:11 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:18:11.085950 | orchestrator | 2026-01-05 01:18:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:18:14.139896 | orchestrator | 2026-01-05 01:18:14 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:18:14.141867 | orchestrator | 2026-01-05 01:18:14 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:18:14.141930 | orchestrator | 2026-01-05 01:18:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:18:17.200145 | orchestrator | 2026-01-05 01:18:17 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:18:17.201626 | orchestrator | 2026-01-05 01:18:17 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:18:17.202113 | orchestrator | 2026-01-05 01:18:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:18:20.251590 | orchestrator | 2026-01-05 01:18:20 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:18:20.253011 | orchestrator | 2026-01-05 01:18:20 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:18:20.253061 | orchestrator | 2026-01-05 01:18:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:18:23.317081 | orchestrator | 2026-01-05 01:18:23 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:18:23.318069 | orchestrator | 2026-01-05 01:18:23 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:18:23.318118 | orchestrator | 2026-01-05 01:18:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:18:26.371401 | orchestrator | 2026-01-05 01:18:26 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:18:26.372475 | orchestrator | 2026-01-05 01:18:26 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:18:26.372546 | orchestrator | 2026-01-05 01:18:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:18:29.430327 | orchestrator | 2026-01-05 01:18:29 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:18:29.431383 | orchestrator | 2026-01-05 01:18:29 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:18:29.431547 | orchestrator | 2026-01-05 01:18:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:18:32.480157 | orchestrator | 2026-01-05 01:18:32 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:18:32.484425 | orchestrator | 2026-01-05 01:18:32 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:18:32.484502 | orchestrator | 2026-01-05 01:18:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:18:35.533794 | orchestrator | 2026-01-05 01:18:35 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:18:35.534860 | orchestrator | 2026-01-05 01:18:35 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:18:35.534938 | orchestrator | 2026-01-05 01:18:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:18:38.590738 | orchestrator | 2026-01-05 01:18:38 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:18:38.593193 | orchestrator | 2026-01-05 01:18:38 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:18:38.593242 | orchestrator | 2026-01-05 01:18:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:18:41.643732 | orchestrator | 2026-01-05 01:18:41 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:18:41.647182 | orchestrator | 2026-01-05 01:18:41 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
01:18:41.647244 | orchestrator | 2026-01-05 01:18:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:18:44.697078 | orchestrator | 2026-01-05 01:18:44 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:18:44.698463 | orchestrator | 2026-01-05 01:18:44 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:18:44.698551 | orchestrator | 2026-01-05 01:18:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:18:47.748789 | orchestrator | 2026-01-05 01:18:47 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:18:47.750422 | orchestrator | 2026-01-05 01:18:47 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:18:47.750497 | orchestrator | 2026-01-05 01:18:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:18:50.805611 | orchestrator | 2026-01-05 01:18:50 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:18:50.807932 | orchestrator | 2026-01-05 01:18:50 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:18:50.807993 | orchestrator | 2026-01-05 01:18:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:18:53.866627 | orchestrator | 2026-01-05 01:18:53 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:18:53.867175 | orchestrator | 2026-01-05 01:18:53 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:18:53.867198 | orchestrator | 2026-01-05 01:18:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:18:56.922938 | orchestrator | 2026-01-05 01:18:56 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:18:56.924150 | orchestrator | 2026-01-05 01:18:56 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:18:56.924235 | orchestrator | 2026-01-05 01:18:56 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 01:18:59.979606 | orchestrator | 2026-01-05 01:18:59 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:18:59.981455 | orchestrator | 2026-01-05 01:18:59 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:18:59.981551 | orchestrator | 2026-01-05 01:18:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:03.037789 | orchestrator | 2026-01-05 01:19:03 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:19:03.039722 | orchestrator | 2026-01-05 01:19:03 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:19:03.039783 | orchestrator | 2026-01-05 01:19:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:06.092136 | orchestrator | 2026-01-05 01:19:06 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:19:06.092317 | orchestrator | 2026-01-05 01:19:06 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:19:06.092421 | orchestrator | 2026-01-05 01:19:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:09.146252 | orchestrator | 2026-01-05 01:19:09 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:19:09.149561 | orchestrator | 2026-01-05 01:19:09 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:19:09.149613 | orchestrator | 2026-01-05 01:19:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:12.203885 | orchestrator | 2026-01-05 01:19:12 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:19:12.205485 | orchestrator | 2026-01-05 01:19:12 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:19:12.205527 | orchestrator | 2026-01-05 01:19:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:15.246665 | orchestrator | 2026-01-05 
01:19:15 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:19:15.247879 | orchestrator | 2026-01-05 01:19:15 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:19:15.247940 | orchestrator | 2026-01-05 01:19:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:18.293990 | orchestrator | 2026-01-05 01:19:18 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:19:18.294512 | orchestrator | 2026-01-05 01:19:18 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:19:18.294585 | orchestrator | 2026-01-05 01:19:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:21.350477 | orchestrator | 2026-01-05 01:19:21 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:19:21.352707 | orchestrator | 2026-01-05 01:19:21 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:19:21.352813 | orchestrator | 2026-01-05 01:19:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:24.406159 | orchestrator | 2026-01-05 01:19:24 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:19:24.407662 | orchestrator | 2026-01-05 01:19:24 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:19:24.407712 | orchestrator | 2026-01-05 01:19:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:27.473472 | orchestrator | 2026-01-05 01:19:27 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:19:27.474572 | orchestrator | 2026-01-05 01:19:27 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:19:27.474606 | orchestrator | 2026-01-05 01:19:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:30.529403 | orchestrator | 2026-01-05 01:19:30 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 01:19:30.532782 | orchestrator | 2026-01-05 01:19:30 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:19:30.532907 | orchestrator | 2026-01-05 01:19:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:33.593866 | orchestrator | 2026-01-05 01:19:33 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:19:33.595372 | orchestrator | 2026-01-05 01:19:33 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:19:33.595454 | orchestrator | 2026-01-05 01:19:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:36.651564 | orchestrator | 2026-01-05 01:19:36 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:19:36.653371 | orchestrator | 2026-01-05 01:19:36 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:19:36.653475 | orchestrator | 2026-01-05 01:19:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:39.706250 | orchestrator | 2026-01-05 01:19:39 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:19:39.707925 | orchestrator | 2026-01-05 01:19:39 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:19:39.707991 | orchestrator | 2026-01-05 01:19:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:42.765434 | orchestrator | 2026-01-05 01:19:42 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:19:42.768781 | orchestrator | 2026-01-05 01:19:42 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:19:42.768886 | orchestrator | 2026-01-05 01:19:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:45.826220 | orchestrator | 2026-01-05 01:19:45 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:19:45.828346 | orchestrator | 2026-01-05 01:19:45 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:19:45.828440 | orchestrator | 2026-01-05 01:19:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:48.884477 | orchestrator | 2026-01-05 01:19:48 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:19:48.886307 | orchestrator | 2026-01-05 01:19:48 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:19:48.886395 | orchestrator | 2026-01-05 01:19:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:51.938684 | orchestrator | 2026-01-05 01:19:51 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:19:51.941801 | orchestrator | 2026-01-05 01:19:51 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:19:51.941911 | orchestrator | 2026-01-05 01:19:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:54.993340 | orchestrator | 2026-01-05 01:19:54 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:19:54.996276 | orchestrator | 2026-01-05 01:19:54 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:19:54.996443 | orchestrator | 2026-01-05 01:19:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:19:58.051687 | orchestrator | 2026-01-05 01:19:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:19:58.053424 | orchestrator | 2026-01-05 01:19:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:19:58.053466 | orchestrator | 2026-01-05 01:19:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:01.100119 | orchestrator | 2026-01-05 01:20:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:20:01.102612 | orchestrator | 2026-01-05 01:20:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
01:20:01.102684 | orchestrator | 2026-01-05 01:20:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:04.151033 | orchestrator | 2026-01-05 01:20:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:20:04.152172 | orchestrator | 2026-01-05 01:20:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:20:04.152239 | orchestrator | 2026-01-05 01:20:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:07.201998 | orchestrator | 2026-01-05 01:20:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:20:07.203561 | orchestrator | 2026-01-05 01:20:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:20:07.203674 | orchestrator | 2026-01-05 01:20:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:10.254615 | orchestrator | 2026-01-05 01:20:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:20:10.256214 | orchestrator | 2026-01-05 01:20:10 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:20:10.256361 | orchestrator | 2026-01-05 01:20:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:13.304211 | orchestrator | 2026-01-05 01:20:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:20:13.306385 | orchestrator | 2026-01-05 01:20:13 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:20:13.306469 | orchestrator | 2026-01-05 01:20:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:16.355478 | orchestrator | 2026-01-05 01:20:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:20:16.356894 | orchestrator | 2026-01-05 01:20:16 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:20:16.356944 | orchestrator | 2026-01-05 01:20:16 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 01:20:19.411962 | orchestrator | 2026-01-05 01:20:19 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:20:19.413113 | orchestrator | 2026-01-05 01:20:19 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:20:19.413188 | orchestrator | 2026-01-05 01:20:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:22.464240 | orchestrator | 2026-01-05 01:20:22 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:20:22.466010 | orchestrator | 2026-01-05 01:20:22 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:20:22.466205 | orchestrator | 2026-01-05 01:20:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:25.517065 | orchestrator | 2026-01-05 01:20:25 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:20:25.519954 | orchestrator | 2026-01-05 01:20:25 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:20:25.520056 | orchestrator | 2026-01-05 01:20:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:28.572670 | orchestrator | 2026-01-05 01:20:28 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:20:28.574866 | orchestrator | 2026-01-05 01:20:28 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:20:28.574922 | orchestrator | 2026-01-05 01:20:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:31.621490 | orchestrator | 2026-01-05 01:20:31 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:20:31.622961 | orchestrator | 2026-01-05 01:20:31 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:20:31.623093 | orchestrator | 2026-01-05 01:20:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:34.667094 | orchestrator | 2026-01-05 
01:20:34 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:20:34.669280 | orchestrator | 2026-01-05 01:20:34 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:20:34.669433 | orchestrator | 2026-01-05 01:20:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:37.714185 | orchestrator | 2026-01-05 01:20:37 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:20:37.715623 | orchestrator | 2026-01-05 01:20:37 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:20:37.715669 | orchestrator | 2026-01-05 01:20:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:40.771729 | orchestrator | 2026-01-05 01:20:40 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:20:40.774620 | orchestrator | 2026-01-05 01:20:40 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:20:40.774697 | orchestrator | 2026-01-05 01:20:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:43.839174 | orchestrator | 2026-01-05 01:20:43 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:20:43.842147 | orchestrator | 2026-01-05 01:20:43 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:20:43.842203 | orchestrator | 2026-01-05 01:20:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:46.900160 | orchestrator | 2026-01-05 01:20:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:20:46.901674 | orchestrator | 2026-01-05 01:20:46 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:20:46.901754 | orchestrator | 2026-01-05 01:20:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:49.954277 | orchestrator | 2026-01-05 01:20:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 01:20:49.958880 | orchestrator | 2026-01-05 01:20:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:20:49.958982 | orchestrator | 2026-01-05 01:20:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:53.009457 | orchestrator | 2026-01-05 01:20:53 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:20:53.014382 | orchestrator | 2026-01-05 01:20:53 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:20:53.014479 | orchestrator | 2026-01-05 01:20:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:56.066116 | orchestrator | 2026-01-05 01:20:56 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:20:56.070868 | orchestrator | 2026-01-05 01:20:56 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:20:56.071048 | orchestrator | 2026-01-05 01:20:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:20:59.122531 | orchestrator | 2026-01-05 01:20:59 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:20:59.124390 | orchestrator | 2026-01-05 01:20:59 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:20:59.124481 | orchestrator | 2026-01-05 01:20:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:21:02.184570 | orchestrator | 2026-01-05 01:21:02 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:21:02.187212 | orchestrator | 2026-01-05 01:21:02 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:21:02.187321 | orchestrator | 2026-01-05 01:21:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:21:05.242584 | orchestrator | 2026-01-05 01:21:05 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:21:05.244360 | orchestrator | 2026-01-05 01:21:05 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:21:05.244474 | orchestrator | 2026-01-05 01:21:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:21:08.301093 | orchestrator | 2026-01-05 01:21:08 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:21:08.304155 | orchestrator | 2026-01-05 01:21:08 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:21:08.304221 | orchestrator | 2026-01-05 01:21:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:21:11.359118 | orchestrator | 2026-01-05 01:21:11 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:21:11.361437 | orchestrator | 2026-01-05 01:21:11 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:21:11.361518 | orchestrator | 2026-01-05 01:21:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:21:14.411890 | orchestrator | 2026-01-05 01:21:14 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:21:14.413915 | orchestrator | 2026-01-05 01:21:14 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:21:14.414147 | orchestrator | 2026-01-05 01:21:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:21:17.462382 | orchestrator | 2026-01-05 01:21:17 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:21:17.463881 | orchestrator | 2026-01-05 01:21:17 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:21:17.464044 | orchestrator | 2026-01-05 01:21:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:21:20.511536 | orchestrator | 2026-01-05 01:21:20 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:21:20.513105 | orchestrator | 2026-01-05 01:21:20 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
01:21:20.513178 | orchestrator | 2026-01-05 01:21:20 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:21:23.564274 | orchestrator | 2026-01-05 01:21:23 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 01:21:23.566643 | orchestrator | 2026-01-05 01:21:23 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 01:21:23.566721 | orchestrator | 2026-01-05 01:21:23 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated roughly every 3 seconds: tasks e3a9f185-bcb6-4913-bb1a-d444ee1687d0 and 00e2a2c6-6b94-416a-ac35-b73676807745 remained in state STARTED from 01:21:23 through 01:26:53 ...]
2026-01-05 01:26:53.487784 | orchestrator | 2026-01-05 01:26:53 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 01:26:53.490065 | orchestrator | 2026-01-05 01:26:53 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 01:26:53.490138 | orchestrator | 2026-01-05 01:26:53 | INFO  | Wait 1 second(s)
until the next check 2026-01-05 01:26:56.542075 | orchestrator | 2026-01-05 01:26:56 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:26:56.543913 | orchestrator | 2026-01-05 01:26:56 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:26:56.544060 | orchestrator | 2026-01-05 01:26:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:26:59.593446 | orchestrator | 2026-01-05 01:26:59 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:26:59.594965 | orchestrator | 2026-01-05 01:26:59 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:26:59.595027 | orchestrator | 2026-01-05 01:26:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:27:02.643528 | orchestrator | 2026-01-05 01:27:02 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:27:02.645857 | orchestrator | 2026-01-05 01:27:02 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:27:02.645927 | orchestrator | 2026-01-05 01:27:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:27:05.698076 | orchestrator | 2026-01-05 01:27:05 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:27:05.699804 | orchestrator | 2026-01-05 01:27:05 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:27:05.699871 | orchestrator | 2026-01-05 01:27:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:27:08.743082 | orchestrator | 2026-01-05 01:27:08 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:27:08.744701 | orchestrator | 2026-01-05 01:27:08 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:27:08.744844 | orchestrator | 2026-01-05 01:27:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:27:11.797379 | orchestrator | 2026-01-05 
01:27:11 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:27:11.797724 | orchestrator | 2026-01-05 01:27:11 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:27:11.797783 | orchestrator | 2026-01-05 01:27:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:27:14.846736 | orchestrator | 2026-01-05 01:27:14 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:27:14.849178 | orchestrator | 2026-01-05 01:27:14 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:27:14.849325 | orchestrator | 2026-01-05 01:27:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:27:17.898441 | orchestrator | 2026-01-05 01:27:17 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:27:17.900383 | orchestrator | 2026-01-05 01:27:17 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:27:17.900443 | orchestrator | 2026-01-05 01:27:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:27:20.948436 | orchestrator | 2026-01-05 01:27:20 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:27:20.950150 | orchestrator | 2026-01-05 01:27:20 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:27:20.950237 | orchestrator | 2026-01-05 01:27:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:27:23.992934 | orchestrator | 2026-01-05 01:27:23 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:27:23.995050 | orchestrator | 2026-01-05 01:27:23 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:27:23.995089 | orchestrator | 2026-01-05 01:27:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:27:27.042171 | orchestrator | 2026-01-05 01:27:27 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 01:27:27.044303 | orchestrator | 2026-01-05 01:27:27 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:27:27.044385 | orchestrator | 2026-01-05 01:27:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:27:30.089586 | orchestrator | 2026-01-05 01:27:30 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:27:30.091870 | orchestrator | 2026-01-05 01:27:30 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:27:30.091954 | orchestrator | 2026-01-05 01:27:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:27:33.147530 | orchestrator | 2026-01-05 01:27:33 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:27:33.149267 | orchestrator | 2026-01-05 01:27:33 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:27:33.149355 | orchestrator | 2026-01-05 01:27:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:27:36.202695 | orchestrator | 2026-01-05 01:27:36 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:27:36.204375 | orchestrator | 2026-01-05 01:27:36 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:27:36.204418 | orchestrator | 2026-01-05 01:27:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:27:39.263900 | orchestrator | 2026-01-05 01:27:39 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:27:39.266352 | orchestrator | 2026-01-05 01:27:39 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:27:39.266532 | orchestrator | 2026-01-05 01:27:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:27:42.320446 | orchestrator | 2026-01-05 01:27:42 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:27:42.322221 | orchestrator | 2026-01-05 01:27:42 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:27:42.322265 | orchestrator | 2026-01-05 01:27:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:27:45.368970 | orchestrator | 2026-01-05 01:27:45 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:27:45.373334 | orchestrator | 2026-01-05 01:27:45 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:27:45.373426 | orchestrator | 2026-01-05 01:27:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:27:48.423005 | orchestrator | 2026-01-05 01:27:48 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:27:48.423697 | orchestrator | 2026-01-05 01:27:48 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:27:48.423862 | orchestrator | 2026-01-05 01:27:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:27:51.468108 | orchestrator | 2026-01-05 01:27:51 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:27:51.469458 | orchestrator | 2026-01-05 01:27:51 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:27:51.469537 | orchestrator | 2026-01-05 01:27:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:27:54.523455 | orchestrator | 2026-01-05 01:27:54 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:27:54.525657 | orchestrator | 2026-01-05 01:27:54 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:27:54.525731 | orchestrator | 2026-01-05 01:27:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:27:57.581245 | orchestrator | 2026-01-05 01:27:57 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:27:57.582826 | orchestrator | 2026-01-05 01:27:57 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
01:27:57.582891 | orchestrator | 2026-01-05 01:27:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:00.638003 | orchestrator | 2026-01-05 01:28:00 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:28:00.641077 | orchestrator | 2026-01-05 01:28:00 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:28:00.641219 | orchestrator | 2026-01-05 01:28:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:03.685971 | orchestrator | 2026-01-05 01:28:03 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:28:03.688482 | orchestrator | 2026-01-05 01:28:03 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:28:03.688561 | orchestrator | 2026-01-05 01:28:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:06.740576 | orchestrator | 2026-01-05 01:28:06 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:28:06.742538 | orchestrator | 2026-01-05 01:28:06 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:28:06.742636 | orchestrator | 2026-01-05 01:28:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:09.797995 | orchestrator | 2026-01-05 01:28:09 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:28:09.799651 | orchestrator | 2026-01-05 01:28:09 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:28:09.799699 | orchestrator | 2026-01-05 01:28:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:12.855676 | orchestrator | 2026-01-05 01:28:12 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:28:12.858371 | orchestrator | 2026-01-05 01:28:12 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:28:12.858438 | orchestrator | 2026-01-05 01:28:12 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 01:28:15.909592 | orchestrator | 2026-01-05 01:28:15 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:28:15.911397 | orchestrator | 2026-01-05 01:28:15 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:28:15.911471 | orchestrator | 2026-01-05 01:28:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:18.964930 | orchestrator | 2026-01-05 01:28:18 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:28:18.967193 | orchestrator | 2026-01-05 01:28:18 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:28:18.967355 | orchestrator | 2026-01-05 01:28:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:22.021163 | orchestrator | 2026-01-05 01:28:22 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:28:22.022987 | orchestrator | 2026-01-05 01:28:22 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:28:22.023071 | orchestrator | 2026-01-05 01:28:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:25.070609 | orchestrator | 2026-01-05 01:28:25 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:28:25.073084 | orchestrator | 2026-01-05 01:28:25 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:28:25.073175 | orchestrator | 2026-01-05 01:28:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:28.128804 | orchestrator | 2026-01-05 01:28:28 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:28:28.129237 | orchestrator | 2026-01-05 01:28:28 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:28:28.129897 | orchestrator | 2026-01-05 01:28:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:31.180647 | orchestrator | 2026-01-05 
01:28:31 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:28:31.182542 | orchestrator | 2026-01-05 01:28:31 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:28:31.182605 | orchestrator | 2026-01-05 01:28:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:34.231611 | orchestrator | 2026-01-05 01:28:34 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:28:34.232990 | orchestrator | 2026-01-05 01:28:34 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:28:34.233089 | orchestrator | 2026-01-05 01:28:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:37.282253 | orchestrator | 2026-01-05 01:28:37 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:28:37.285050 | orchestrator | 2026-01-05 01:28:37 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:28:37.285190 | orchestrator | 2026-01-05 01:28:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:40.350628 | orchestrator | 2026-01-05 01:28:40 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:28:40.351868 | orchestrator | 2026-01-05 01:28:40 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:28:40.351913 | orchestrator | 2026-01-05 01:28:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:43.404611 | orchestrator | 2026-01-05 01:28:43 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:28:43.407236 | orchestrator | 2026-01-05 01:28:43 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:28:43.407354 | orchestrator | 2026-01-05 01:28:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:46.461960 | orchestrator | 2026-01-05 01:28:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 01:28:46.463825 | orchestrator | 2026-01-05 01:28:46 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:28:46.463863 | orchestrator | 2026-01-05 01:28:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:49.514369 | orchestrator | 2026-01-05 01:28:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:28:49.516279 | orchestrator | 2026-01-05 01:28:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:28:49.516334 | orchestrator | 2026-01-05 01:28:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:52.570760 | orchestrator | 2026-01-05 01:28:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:28:52.572247 | orchestrator | 2026-01-05 01:28:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:28:52.572566 | orchestrator | 2026-01-05 01:28:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:55.622967 | orchestrator | 2026-01-05 01:28:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:28:55.624963 | orchestrator | 2026-01-05 01:28:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:28:55.625027 | orchestrator | 2026-01-05 01:28:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:28:58.673294 | orchestrator | 2026-01-05 01:28:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:28:58.674553 | orchestrator | 2026-01-05 01:28:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:28:58.674666 | orchestrator | 2026-01-05 01:28:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:01.730574 | orchestrator | 2026-01-05 01:29:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:29:01.732507 | orchestrator | 2026-01-05 01:29:01 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:29:01.732558 | orchestrator | 2026-01-05 01:29:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:04.782784 | orchestrator | 2026-01-05 01:29:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:29:04.785605 | orchestrator | 2026-01-05 01:29:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:29:04.785702 | orchestrator | 2026-01-05 01:29:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:07.838005 | orchestrator | 2026-01-05 01:29:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:29:07.839533 | orchestrator | 2026-01-05 01:29:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:29:07.839674 | orchestrator | 2026-01-05 01:29:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:10.890467 | orchestrator | 2026-01-05 01:29:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:29:10.892103 | orchestrator | 2026-01-05 01:29:10 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:29:10.892213 | orchestrator | 2026-01-05 01:29:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:13.942144 | orchestrator | 2026-01-05 01:29:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:29:13.943287 | orchestrator | 2026-01-05 01:29:13 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:29:13.943326 | orchestrator | 2026-01-05 01:29:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:16.989009 | orchestrator | 2026-01-05 01:29:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:29:16.990765 | orchestrator | 2026-01-05 01:29:16 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
01:29:16.990863 | orchestrator | 2026-01-05 01:29:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:20.049746 | orchestrator | 2026-01-05 01:29:20 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:29:20.053112 | orchestrator | 2026-01-05 01:29:20 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:29:20.053242 | orchestrator | 2026-01-05 01:29:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:23.103087 | orchestrator | 2026-01-05 01:29:23 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:29:23.104316 | orchestrator | 2026-01-05 01:29:23 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:29:23.104427 | orchestrator | 2026-01-05 01:29:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:26.154848 | orchestrator | 2026-01-05 01:29:26 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:29:26.157421 | orchestrator | 2026-01-05 01:29:26 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:29:26.157469 | orchestrator | 2026-01-05 01:29:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:29.210387 | orchestrator | 2026-01-05 01:29:29 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:29:29.210481 | orchestrator | 2026-01-05 01:29:29 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:29:29.210528 | orchestrator | 2026-01-05 01:29:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:32.259040 | orchestrator | 2026-01-05 01:29:32 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:29:32.261911 | orchestrator | 2026-01-05 01:29:32 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:29:32.262149 | orchestrator | 2026-01-05 01:29:32 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 01:29:35.314527 | orchestrator | 2026-01-05 01:29:35 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:29:35.317097 | orchestrator | 2026-01-05 01:29:35 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:29:35.317160 | orchestrator | 2026-01-05 01:29:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:38.363863 | orchestrator | 2026-01-05 01:29:38 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:29:38.363978 | orchestrator | 2026-01-05 01:29:38 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:29:38.363992 | orchestrator | 2026-01-05 01:29:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:41.423123 | orchestrator | 2026-01-05 01:29:41 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:29:41.427560 | orchestrator | 2026-01-05 01:29:41 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:29:41.427718 | orchestrator | 2026-01-05 01:29:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:44.482596 | orchestrator | 2026-01-05 01:29:44 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:29:44.485376 | orchestrator | 2026-01-05 01:29:44 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:29:44.485506 | orchestrator | 2026-01-05 01:29:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:47.541969 | orchestrator | 2026-01-05 01:29:47 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:29:47.543683 | orchestrator | 2026-01-05 01:29:47 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:29:47.543759 | orchestrator | 2026-01-05 01:29:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:50.592814 | orchestrator | 2026-01-05 
01:29:50 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:29:50.593733 | orchestrator | 2026-01-05 01:29:50 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:29:50.593761 | orchestrator | 2026-01-05 01:29:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:53.645169 | orchestrator | 2026-01-05 01:29:53 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:29:53.646369 | orchestrator | 2026-01-05 01:29:53 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:29:53.646507 | orchestrator | 2026-01-05 01:29:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:56.701112 | orchestrator | 2026-01-05 01:29:56 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:29:56.703358 | orchestrator | 2026-01-05 01:29:56 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:29:56.703455 | orchestrator | 2026-01-05 01:29:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:29:59.762137 | orchestrator | 2026-01-05 01:29:59 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:29:59.766439 | orchestrator | 2026-01-05 01:29:59 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:29:59.766668 | orchestrator | 2026-01-05 01:29:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:30:02.818551 | orchestrator | 2026-01-05 01:30:02 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:30:02.818969 | orchestrator | 2026-01-05 01:30:02 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:30:02.819003 | orchestrator | 2026-01-05 01:30:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:30:05.867735 | orchestrator | 2026-01-05 01:30:05 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 01:30:05.870153 | orchestrator | 2026-01-05 01:30:05 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:30:05.870232 | orchestrator | 2026-01-05 01:30:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:30:08.912606 | orchestrator | 2026-01-05 01:30:08 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:30:08.913393 | orchestrator | 2026-01-05 01:30:08 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:30:08.913451 | orchestrator | 2026-01-05 01:30:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:30:11.963135 | orchestrator | 2026-01-05 01:30:11 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:30:11.964731 | orchestrator | 2026-01-05 01:30:11 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:30:11.964806 | orchestrator | 2026-01-05 01:30:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:30:15.026050 | orchestrator | 2026-01-05 01:30:15 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:30:15.029095 | orchestrator | 2026-01-05 01:30:15 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:30:15.029159 | orchestrator | 2026-01-05 01:30:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:30:18.085769 | orchestrator | 2026-01-05 01:30:18 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:30:18.088431 | orchestrator | 2026-01-05 01:30:18 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:30:18.088512 | orchestrator | 2026-01-05 01:30:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:30:21.131946 | orchestrator | 2026-01-05 01:30:21 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:30:21.132661 | orchestrator | 2026-01-05 01:30:21 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:30:21.132959 | orchestrator | 2026-01-05 01:30:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:30:24.183557 | orchestrator | 2026-01-05 01:30:24 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:30:24.185181 | orchestrator | 2026-01-05 01:30:24 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:30:24.185790 | orchestrator | 2026-01-05 01:30:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:30:27.240735 | orchestrator | 2026-01-05 01:30:27 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:30:27.242597 | orchestrator | 2026-01-05 01:30:27 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:30:27.242650 | orchestrator | 2026-01-05 01:30:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:30:30.306670 | orchestrator | 2026-01-05 01:30:30 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:30:30.307478 | orchestrator | 2026-01-05 01:30:30 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:30:30.307511 | orchestrator | 2026-01-05 01:30:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:30:33.363409 | orchestrator | 2026-01-05 01:30:33 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:30:33.364712 | orchestrator | 2026-01-05 01:30:33 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:30:33.364763 | orchestrator | 2026-01-05 01:30:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:30:36.415062 | orchestrator | 2026-01-05 01:30:36 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:30:36.417926 | orchestrator | 2026-01-05 01:30:36 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
01:30:36.418105 | orchestrator | 2026-01-05 01:30:36 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:30:39.461506 | orchestrator | 2026-01-05 01:30:39 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 01:30:39.463656 | orchestrator | 2026-01-05 01:30:39 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 01:30:39.463717 | orchestrator | 2026-01-05 01:30:39 | INFO  | Wait 1 second(s) until the next check
[... identical poll cycle repeated every ~3 seconds from 01:30:42 to 01:35:35; both tasks remained in state STARTED ...]
2026-01-05 01:35:38.693168 | orchestrator | 2026-01-05 01:35:38 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 01:35:38.693293 | orchestrator | 2026-01-05 01:35:38 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:35:38.693308 | orchestrator | 2026-01-05 01:35:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:35:41.743986 | orchestrator | 2026-01-05 01:35:41 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:35:41.746877 | orchestrator | 2026-01-05 01:35:41 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:35:41.746954 | orchestrator | 2026-01-05 01:35:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:35:44.799520 | orchestrator | 2026-01-05 01:35:44 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:35:44.803068 | orchestrator | 2026-01-05 01:35:44 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:35:44.803151 | orchestrator | 2026-01-05 01:35:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:35:47.852930 | orchestrator | 2026-01-05 01:35:47 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:35:47.854189 | orchestrator | 2026-01-05 01:35:47 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:35:47.854379 | orchestrator | 2026-01-05 01:35:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:35:50.899335 | orchestrator | 2026-01-05 01:35:50 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:35:50.900142 | orchestrator | 2026-01-05 01:35:50 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:35:50.900228 | orchestrator | 2026-01-05 01:35:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:35:53.950235 | orchestrator | 2026-01-05 01:35:53 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:35:53.951639 | orchestrator | 2026-01-05 01:35:53 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
01:35:53.951704 | orchestrator | 2026-01-05 01:35:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:35:57.010872 | orchestrator | 2026-01-05 01:35:57 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:35:57.012252 | orchestrator | 2026-01-05 01:35:57 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:35:57.012368 | orchestrator | 2026-01-05 01:35:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:36:00.072415 | orchestrator | 2026-01-05 01:36:00 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:36:00.073548 | orchestrator | 2026-01-05 01:36:00 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:36:00.073597 | orchestrator | 2026-01-05 01:36:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:36:03.126288 | orchestrator | 2026-01-05 01:36:03 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:36:03.127097 | orchestrator | 2026-01-05 01:36:03 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:36:03.127140 | orchestrator | 2026-01-05 01:36:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:36:06.180275 | orchestrator | 2026-01-05 01:36:06 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:36:06.182140 | orchestrator | 2026-01-05 01:36:06 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:36:06.182431 | orchestrator | 2026-01-05 01:36:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:36:09.229901 | orchestrator | 2026-01-05 01:36:09 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:36:09.233417 | orchestrator | 2026-01-05 01:36:09 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:36:09.233591 | orchestrator | 2026-01-05 01:36:09 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 01:36:12.274897 | orchestrator | 2026-01-05 01:36:12 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:36:12.275248 | orchestrator | 2026-01-05 01:36:12 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:36:12.275281 | orchestrator | 2026-01-05 01:36:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:36:15.318233 | orchestrator | 2026-01-05 01:36:15 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:36:15.320347 | orchestrator | 2026-01-05 01:36:15 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:36:15.320407 | orchestrator | 2026-01-05 01:36:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:36:18.371210 | orchestrator | 2026-01-05 01:36:18 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:36:18.372420 | orchestrator | 2026-01-05 01:36:18 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:36:18.372509 | orchestrator | 2026-01-05 01:36:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:36:21.428099 | orchestrator | 2026-01-05 01:36:21 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:36:21.429703 | orchestrator | 2026-01-05 01:36:21 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:36:21.429770 | orchestrator | 2026-01-05 01:36:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:36:24.483222 | orchestrator | 2026-01-05 01:36:24 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:36:24.486001 | orchestrator | 2026-01-05 01:36:24 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:36:24.486126 | orchestrator | 2026-01-05 01:36:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:36:27.540646 | orchestrator | 2026-01-05 
01:36:27 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:36:27.542735 | orchestrator | 2026-01-05 01:36:27 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:36:27.542794 | orchestrator | 2026-01-05 01:36:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:36:30.594958 | orchestrator | 2026-01-05 01:36:30 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:36:30.595564 | orchestrator | 2026-01-05 01:36:30 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:36:30.595596 | orchestrator | 2026-01-05 01:36:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:36:33.649491 | orchestrator | 2026-01-05 01:36:33 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:36:33.651955 | orchestrator | 2026-01-05 01:36:33 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:36:33.652039 | orchestrator | 2026-01-05 01:36:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:36:36.704566 | orchestrator | 2026-01-05 01:36:36 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:36:36.707243 | orchestrator | 2026-01-05 01:36:36 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:36:36.707341 | orchestrator | 2026-01-05 01:36:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:36:39.758575 | orchestrator | 2026-01-05 01:36:39 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:36:39.762172 | orchestrator | 2026-01-05 01:36:39 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:36:39.762241 | orchestrator | 2026-01-05 01:36:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:36:42.818598 | orchestrator | 2026-01-05 01:36:42 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 01:36:42.820014 | orchestrator | 2026-01-05 01:36:42 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:36:42.820082 | orchestrator | 2026-01-05 01:36:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:36:45.876149 | orchestrator | 2026-01-05 01:36:45 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:36:45.877203 | orchestrator | 2026-01-05 01:36:45 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:36:45.877233 | orchestrator | 2026-01-05 01:36:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:36:48.930524 | orchestrator | 2026-01-05 01:36:48 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:36:48.933195 | orchestrator | 2026-01-05 01:36:48 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:36:48.933362 | orchestrator | 2026-01-05 01:36:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:36:51.990929 | orchestrator | 2026-01-05 01:36:51 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:36:51.992833 | orchestrator | 2026-01-05 01:36:51 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:36:51.992902 | orchestrator | 2026-01-05 01:36:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:36:55.042588 | orchestrator | 2026-01-05 01:36:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:36:55.047720 | orchestrator | 2026-01-05 01:36:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:36:55.047784 | orchestrator | 2026-01-05 01:36:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:36:58.098546 | orchestrator | 2026-01-05 01:36:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:36:58.102061 | orchestrator | 2026-01-05 01:36:58 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:36:58.102130 | orchestrator | 2026-01-05 01:36:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:01.161390 | orchestrator | 2026-01-05 01:37:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:37:01.164619 | orchestrator | 2026-01-05 01:37:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:37:01.164699 | orchestrator | 2026-01-05 01:37:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:04.215295 | orchestrator | 2026-01-05 01:37:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:37:04.216472 | orchestrator | 2026-01-05 01:37:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:37:04.216640 | orchestrator | 2026-01-05 01:37:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:07.277229 | orchestrator | 2026-01-05 01:37:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:37:07.279667 | orchestrator | 2026-01-05 01:37:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:37:07.279723 | orchestrator | 2026-01-05 01:37:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:10.329714 | orchestrator | 2026-01-05 01:37:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:37:10.330391 | orchestrator | 2026-01-05 01:37:10 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:37:10.330882 | orchestrator | 2026-01-05 01:37:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:13.379689 | orchestrator | 2026-01-05 01:37:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:37:13.383361 | orchestrator | 2026-01-05 01:37:13 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
01:37:13.383904 | orchestrator | 2026-01-05 01:37:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:16.428225 | orchestrator | 2026-01-05 01:37:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:37:16.428989 | orchestrator | 2026-01-05 01:37:16 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:37:16.429028 | orchestrator | 2026-01-05 01:37:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:19.479340 | orchestrator | 2026-01-05 01:37:19 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:37:19.481025 | orchestrator | 2026-01-05 01:37:19 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:37:19.481306 | orchestrator | 2026-01-05 01:37:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:22.533239 | orchestrator | 2026-01-05 01:37:22 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:37:22.535126 | orchestrator | 2026-01-05 01:37:22 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:37:22.535170 | orchestrator | 2026-01-05 01:37:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:25.580229 | orchestrator | 2026-01-05 01:37:25 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:37:25.580496 | orchestrator | 2026-01-05 01:37:25 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:37:25.580519 | orchestrator | 2026-01-05 01:37:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:28.624035 | orchestrator | 2026-01-05 01:37:28 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:37:28.625444 | orchestrator | 2026-01-05 01:37:28 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:37:28.625788 | orchestrator | 2026-01-05 01:37:28 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 01:37:31.676696 | orchestrator | 2026-01-05 01:37:31 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:37:31.678350 | orchestrator | 2026-01-05 01:37:31 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:37:31.678408 | orchestrator | 2026-01-05 01:37:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:34.734306 | orchestrator | 2026-01-05 01:37:34 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:37:34.737151 | orchestrator | 2026-01-05 01:37:34 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:37:34.737237 | orchestrator | 2026-01-05 01:37:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:37.787451 | orchestrator | 2026-01-05 01:37:37 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:37:37.789061 | orchestrator | 2026-01-05 01:37:37 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:37:37.789178 | orchestrator | 2026-01-05 01:37:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:40.838668 | orchestrator | 2026-01-05 01:37:40 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:37:40.840909 | orchestrator | 2026-01-05 01:37:40 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:37:40.841018 | orchestrator | 2026-01-05 01:37:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:43.887336 | orchestrator | 2026-01-05 01:37:43 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:37:43.888593 | orchestrator | 2026-01-05 01:37:43 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:37:43.888680 | orchestrator | 2026-01-05 01:37:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:46.936414 | orchestrator | 2026-01-05 
01:37:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:37:46.936520 | orchestrator | 2026-01-05 01:37:46 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:37:46.936530 | orchestrator | 2026-01-05 01:37:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:49.989092 | orchestrator | 2026-01-05 01:37:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:37:49.991337 | orchestrator | 2026-01-05 01:37:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:37:49.991464 | orchestrator | 2026-01-05 01:37:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:53.040121 | orchestrator | 2026-01-05 01:37:53 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:37:53.040786 | orchestrator | 2026-01-05 01:37:53 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:37:53.040816 | orchestrator | 2026-01-05 01:37:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:56.088007 | orchestrator | 2026-01-05 01:37:56 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:37:56.089906 | orchestrator | 2026-01-05 01:37:56 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:37:56.089955 | orchestrator | 2026-01-05 01:37:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:37:59.141829 | orchestrator | 2026-01-05 01:37:59 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:37:59.142546 | orchestrator | 2026-01-05 01:37:59 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:37:59.142763 | orchestrator | 2026-01-05 01:37:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:02.193694 | orchestrator | 2026-01-05 01:38:02 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 01:38:02.196029 | orchestrator | 2026-01-05 01:38:02 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:38:02.196291 | orchestrator | 2026-01-05 01:38:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:05.250377 | orchestrator | 2026-01-05 01:38:05 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:38:05.252606 | orchestrator | 2026-01-05 01:38:05 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:38:05.252722 | orchestrator | 2026-01-05 01:38:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:08.315700 | orchestrator | 2026-01-05 01:38:08 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:38:08.317278 | orchestrator | 2026-01-05 01:38:08 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:38:08.317369 | orchestrator | 2026-01-05 01:38:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:11.371714 | orchestrator | 2026-01-05 01:38:11 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:38:11.375000 | orchestrator | 2026-01-05 01:38:11 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:38:11.375119 | orchestrator | 2026-01-05 01:38:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:14.428238 | orchestrator | 2026-01-05 01:38:14 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:38:14.429920 | orchestrator | 2026-01-05 01:38:14 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:38:14.430225 | orchestrator | 2026-01-05 01:38:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:17.484078 | orchestrator | 2026-01-05 01:38:17 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:38:17.486259 | orchestrator | 2026-01-05 01:38:17 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:38:17.486326 | orchestrator | 2026-01-05 01:38:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:20.538295 | orchestrator | 2026-01-05 01:38:20 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:38:20.540282 | orchestrator | 2026-01-05 01:38:20 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:38:20.540331 | orchestrator | 2026-01-05 01:38:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:23.594226 | orchestrator | 2026-01-05 01:38:23 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:38:23.596021 | orchestrator | 2026-01-05 01:38:23 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:38:23.596075 | orchestrator | 2026-01-05 01:38:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:26.644329 | orchestrator | 2026-01-05 01:38:26 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:38:26.644737 | orchestrator | 2026-01-05 01:38:26 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:38:26.644772 | orchestrator | 2026-01-05 01:38:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:29.697734 | orchestrator | 2026-01-05 01:38:29 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:38:29.698666 | orchestrator | 2026-01-05 01:38:29 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:38:29.698702 | orchestrator | 2026-01-05 01:38:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:32.754331 | orchestrator | 2026-01-05 01:38:32 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:38:32.756196 | orchestrator | 2026-01-05 01:38:32 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
01:38:32.756224 | orchestrator | 2026-01-05 01:38:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:35.807374 | orchestrator | 2026-01-05 01:38:35 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:38:35.809037 | orchestrator | 2026-01-05 01:38:35 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:38:35.809078 | orchestrator | 2026-01-05 01:38:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:38.861410 | orchestrator | 2026-01-05 01:38:38 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:38:38.863418 | orchestrator | 2026-01-05 01:38:38 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:38:38.863516 | orchestrator | 2026-01-05 01:38:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:41.910932 | orchestrator | 2026-01-05 01:38:41 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:38:41.913089 | orchestrator | 2026-01-05 01:38:41 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:38:41.913137 | orchestrator | 2026-01-05 01:38:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:44.960942 | orchestrator | 2026-01-05 01:38:44 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:38:44.963082 | orchestrator | 2026-01-05 01:38:44 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:38:44.963149 | orchestrator | 2026-01-05 01:38:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:48.017038 | orchestrator | 2026-01-05 01:38:48 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:38:48.018894 | orchestrator | 2026-01-05 01:38:48 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:38:48.018992 | orchestrator | 2026-01-05 01:38:48 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 01:38:51.072004 | orchestrator | 2026-01-05 01:38:51 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:38:51.074988 | orchestrator | 2026-01-05 01:38:51 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:38:51.075056 | orchestrator | 2026-01-05 01:38:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:54.123186 | orchestrator | 2026-01-05 01:38:54 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:38:54.125694 | orchestrator | 2026-01-05 01:38:54 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:38:54.126370 | orchestrator | 2026-01-05 01:38:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:38:57.172539 | orchestrator | 2026-01-05 01:38:57 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:38:57.172651 | orchestrator | 2026-01-05 01:38:57 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:38:57.172727 | orchestrator | 2026-01-05 01:38:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:39:00.228513 | orchestrator | 2026-01-05 01:39:00 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:39:00.229516 | orchestrator | 2026-01-05 01:39:00 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:39:00.229679 | orchestrator | 2026-01-05 01:39:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:39:03.278496 | orchestrator | 2026-01-05 01:39:03 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:39:03.280606 | orchestrator | 2026-01-05 01:39:03 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:39:03.280673 | orchestrator | 2026-01-05 01:39:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:39:06.327819 | orchestrator | 2026-01-05 
01:39:06 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:39:06.329159 | orchestrator | 2026-01-05 01:39:06 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:39:06.329219 | orchestrator | 2026-01-05 01:39:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:39:09.381862 | orchestrator | 2026-01-05 01:39:09 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:39:09.383448 | orchestrator | 2026-01-05 01:39:09 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:39:09.383528 | orchestrator | 2026-01-05 01:39:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:39:12.437459 | orchestrator | 2026-01-05 01:39:12 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:39:12.439221 | orchestrator | 2026-01-05 01:39:12 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:39:12.439265 | orchestrator | 2026-01-05 01:39:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:39:15.486936 | orchestrator | 2026-01-05 01:39:15 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:39:15.488370 | orchestrator | 2026-01-05 01:39:15 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:39:15.488490 | orchestrator | 2026-01-05 01:39:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:39:18.536562 | orchestrator | 2026-01-05 01:39:18 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:39:18.539090 | orchestrator | 2026-01-05 01:39:18 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:39:18.539257 | orchestrator | 2026-01-05 01:39:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:39:21.581303 | orchestrator | 2026-01-05 01:39:21 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED
2026-01-05 01:39:21.583270 | orchestrator | 2026-01-05 01:39:21 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 01:39:21.583343 | orchestrator | 2026-01-05 01:39:21 | INFO  | Wait 1 second(s) until the next check
2026-01-05 01:44:54.476166 | orchestrator | 2026-01-05 01:44:54 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 01:44:54.479540 | orchestrator | 2026-01-05 01:44:54 | INFO 
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:44:54.479660 | orchestrator | 2026-01-05 01:44:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:44:57.526734 | orchestrator | 2026-01-05 01:44:57 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:44:57.528468 | orchestrator | 2026-01-05 01:44:57 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:44:57.528742 | orchestrator | 2026-01-05 01:44:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:45:00.572889 | orchestrator | 2026-01-05 01:45:00 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:45:00.574618 | orchestrator | 2026-01-05 01:45:00 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:45:00.574679 | orchestrator | 2026-01-05 01:45:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:45:03.623616 | orchestrator | 2026-01-05 01:45:03 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:45:03.625456 | orchestrator | 2026-01-05 01:45:03 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:45:03.625514 | orchestrator | 2026-01-05 01:45:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:45:06.670358 | orchestrator | 2026-01-05 01:45:06 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:45:06.670995 | orchestrator | 2026-01-05 01:45:06 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:45:06.671058 | orchestrator | 2026-01-05 01:45:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:45:09.720199 | orchestrator | 2026-01-05 01:45:09 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:45:09.721217 | orchestrator | 2026-01-05 01:45:09 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
01:45:09.721307 | orchestrator | 2026-01-05 01:45:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:45:12.777978 | orchestrator | 2026-01-05 01:45:12 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:45:12.780326 | orchestrator | 2026-01-05 01:45:12 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:45:12.780392 | orchestrator | 2026-01-05 01:45:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:45:15.830365 | orchestrator | 2026-01-05 01:45:15 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:45:15.834556 | orchestrator | 2026-01-05 01:45:15 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:45:15.834698 | orchestrator | 2026-01-05 01:45:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:45:18.890282 | orchestrator | 2026-01-05 01:45:18 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:45:18.892235 | orchestrator | 2026-01-05 01:45:18 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:45:18.892288 | orchestrator | 2026-01-05 01:45:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:45:21.941273 | orchestrator | 2026-01-05 01:45:21 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:45:21.944390 | orchestrator | 2026-01-05 01:45:21 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:45:21.944454 | orchestrator | 2026-01-05 01:45:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:45:25.002093 | orchestrator | 2026-01-05 01:45:24 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:45:25.003456 | orchestrator | 2026-01-05 01:45:25 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:45:25.003562 | orchestrator | 2026-01-05 01:45:25 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 01:45:28.049892 | orchestrator | 2026-01-05 01:45:28 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:45:28.052646 | orchestrator | 2026-01-05 01:45:28 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:45:28.052724 | orchestrator | 2026-01-05 01:45:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:45:31.098496 | orchestrator | 2026-01-05 01:45:31 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:45:31.099120 | orchestrator | 2026-01-05 01:45:31 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:45:31.099243 | orchestrator | 2026-01-05 01:45:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:45:34.141169 | orchestrator | 2026-01-05 01:45:34 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:45:34.142983 | orchestrator | 2026-01-05 01:45:34 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:45:34.143077 | orchestrator | 2026-01-05 01:45:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:45:37.194257 | orchestrator | 2026-01-05 01:45:37 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:45:37.195458 | orchestrator | 2026-01-05 01:45:37 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:45:37.195490 | orchestrator | 2026-01-05 01:45:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:45:40.243554 | orchestrator | 2026-01-05 01:45:40 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:45:40.243994 | orchestrator | 2026-01-05 01:45:40 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:45:40.244017 | orchestrator | 2026-01-05 01:45:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:45:43.289556 | orchestrator | 2026-01-05 
01:45:43 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:45:43.292930 | orchestrator | 2026-01-05 01:45:43 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:45:43.293009 | orchestrator | 2026-01-05 01:45:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:45:46.342682 | orchestrator | 2026-01-05 01:45:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:45:46.345298 | orchestrator | 2026-01-05 01:45:46 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:45:46.345364 | orchestrator | 2026-01-05 01:45:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:45:49.395202 | orchestrator | 2026-01-05 01:45:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:45:49.397734 | orchestrator | 2026-01-05 01:45:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:45:49.397809 | orchestrator | 2026-01-05 01:45:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:45:52.454427 | orchestrator | 2026-01-05 01:45:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:45:52.457425 | orchestrator | 2026-01-05 01:45:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:45:52.457506 | orchestrator | 2026-01-05 01:45:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:45:55.505300 | orchestrator | 2026-01-05 01:45:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:45:55.508029 | orchestrator | 2026-01-05 01:45:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:45:55.508078 | orchestrator | 2026-01-05 01:45:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:45:58.552863 | orchestrator | 2026-01-05 01:45:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 01:45:58.554369 | orchestrator | 2026-01-05 01:45:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:45:58.554399 | orchestrator | 2026-01-05 01:45:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:01.601561 | orchestrator | 2026-01-05 01:46:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:46:01.603487 | orchestrator | 2026-01-05 01:46:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:46:01.603544 | orchestrator | 2026-01-05 01:46:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:04.654504 | orchestrator | 2026-01-05 01:46:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:46:04.656928 | orchestrator | 2026-01-05 01:46:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:46:04.656991 | orchestrator | 2026-01-05 01:46:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:07.711610 | orchestrator | 2026-01-05 01:46:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:46:07.713774 | orchestrator | 2026-01-05 01:46:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:46:07.713818 | orchestrator | 2026-01-05 01:46:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:10.764395 | orchestrator | 2026-01-05 01:46:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:46:10.765774 | orchestrator | 2026-01-05 01:46:10 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:46:10.765826 | orchestrator | 2026-01-05 01:46:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:13.804411 | orchestrator | 2026-01-05 01:46:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:46:13.806783 | orchestrator | 2026-01-05 01:46:13 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:46:13.806851 | orchestrator | 2026-01-05 01:46:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:16.851467 | orchestrator | 2026-01-05 01:46:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:46:16.854237 | orchestrator | 2026-01-05 01:46:16 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:46:16.854299 | orchestrator | 2026-01-05 01:46:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:19.901039 | orchestrator | 2026-01-05 01:46:19 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:46:19.905249 | orchestrator | 2026-01-05 01:46:19 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:46:19.906132 | orchestrator | 2026-01-05 01:46:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:22.959611 | orchestrator | 2026-01-05 01:46:22 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:46:22.961857 | orchestrator | 2026-01-05 01:46:22 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:46:22.961931 | orchestrator | 2026-01-05 01:46:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:26.018668 | orchestrator | 2026-01-05 01:46:26 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:46:26.021260 | orchestrator | 2026-01-05 01:46:26 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:46:26.021323 | orchestrator | 2026-01-05 01:46:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:29.073202 | orchestrator | 2026-01-05 01:46:29 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:46:29.075078 | orchestrator | 2026-01-05 01:46:29 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
01:46:29.075138 | orchestrator | 2026-01-05 01:46:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:32.118969 | orchestrator | 2026-01-05 01:46:32 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:46:32.119606 | orchestrator | 2026-01-05 01:46:32 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:46:32.119680 | orchestrator | 2026-01-05 01:46:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:35.167411 | orchestrator | 2026-01-05 01:46:35 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:46:35.170184 | orchestrator | 2026-01-05 01:46:35 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:46:35.170257 | orchestrator | 2026-01-05 01:46:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:38.219170 | orchestrator | 2026-01-05 01:46:38 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:46:38.220564 | orchestrator | 2026-01-05 01:46:38 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:46:38.220621 | orchestrator | 2026-01-05 01:46:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:41.270183 | orchestrator | 2026-01-05 01:46:41 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:46:41.271038 | orchestrator | 2026-01-05 01:46:41 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:46:41.271408 | orchestrator | 2026-01-05 01:46:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:44.316185 | orchestrator | 2026-01-05 01:46:44 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:46:44.316350 | orchestrator | 2026-01-05 01:46:44 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:46:44.316369 | orchestrator | 2026-01-05 01:46:44 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 01:46:47.357364 | orchestrator | 2026-01-05 01:46:47 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:46:47.359580 | orchestrator | 2026-01-05 01:46:47 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:46:47.360574 | orchestrator | 2026-01-05 01:46:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:50.409978 | orchestrator | 2026-01-05 01:46:50 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:46:50.411888 | orchestrator | 2026-01-05 01:46:50 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:46:50.411927 | orchestrator | 2026-01-05 01:46:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:53.460612 | orchestrator | 2026-01-05 01:46:53 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:46:53.462928 | orchestrator | 2026-01-05 01:46:53 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:46:53.463001 | orchestrator | 2026-01-05 01:46:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:56.515857 | orchestrator | 2026-01-05 01:46:56 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:46:56.517797 | orchestrator | 2026-01-05 01:46:56 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:46:56.517848 | orchestrator | 2026-01-05 01:46:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:46:59.564097 | orchestrator | 2026-01-05 01:46:59 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:46:59.565769 | orchestrator | 2026-01-05 01:46:59 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:46:59.565813 | orchestrator | 2026-01-05 01:46:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:02.615808 | orchestrator | 2026-01-05 
01:47:02 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:47:02.617984 | orchestrator | 2026-01-05 01:47:02 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:47:02.618098 | orchestrator | 2026-01-05 01:47:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:05.663627 | orchestrator | 2026-01-05 01:47:05 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:47:05.665504 | orchestrator | 2026-01-05 01:47:05 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:47:05.665560 | orchestrator | 2026-01-05 01:47:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:08.713514 | orchestrator | 2026-01-05 01:47:08 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:47:08.714512 | orchestrator | 2026-01-05 01:47:08 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:47:08.714560 | orchestrator | 2026-01-05 01:47:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:11.760950 | orchestrator | 2026-01-05 01:47:11 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:47:11.762477 | orchestrator | 2026-01-05 01:47:11 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:47:11.762540 | orchestrator | 2026-01-05 01:47:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:14.814359 | orchestrator | 2026-01-05 01:47:14 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:47:14.816849 | orchestrator | 2026-01-05 01:47:14 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:47:14.816893 | orchestrator | 2026-01-05 01:47:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:17.866592 | orchestrator | 2026-01-05 01:47:17 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 01:47:17.869171 | orchestrator | 2026-01-05 01:47:17 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:47:17.869400 | orchestrator | 2026-01-05 01:47:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:20.918427 | orchestrator | 2026-01-05 01:47:20 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:47:20.920514 | orchestrator | 2026-01-05 01:47:20 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:47:20.920588 | orchestrator | 2026-01-05 01:47:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:23.974135 | orchestrator | 2026-01-05 01:47:23 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:47:23.975283 | orchestrator | 2026-01-05 01:47:23 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:47:23.975373 | orchestrator | 2026-01-05 01:47:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:27.037098 | orchestrator | 2026-01-05 01:47:27 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:47:27.039003 | orchestrator | 2026-01-05 01:47:27 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:47:27.039073 | orchestrator | 2026-01-05 01:47:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:30.078746 | orchestrator | 2026-01-05 01:47:30 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:47:30.082612 | orchestrator | 2026-01-05 01:47:30 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:47:30.082852 | orchestrator | 2026-01-05 01:47:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:33.131940 | orchestrator | 2026-01-05 01:47:33 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:47:33.132546 | orchestrator | 2026-01-05 01:47:33 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:47:33.132571 | orchestrator | 2026-01-05 01:47:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:36.176213 | orchestrator | 2026-01-05 01:47:36 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:47:36.178248 | orchestrator | 2026-01-05 01:47:36 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:47:36.178303 | orchestrator | 2026-01-05 01:47:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:39.223987 | orchestrator | 2026-01-05 01:47:39 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:47:39.226281 | orchestrator | 2026-01-05 01:47:39 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:47:39.226346 | orchestrator | 2026-01-05 01:47:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:42.280676 | orchestrator | 2026-01-05 01:47:42 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:47:42.284038 | orchestrator | 2026-01-05 01:47:42 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:47:42.284110 | orchestrator | 2026-01-05 01:47:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:45.329629 | orchestrator | 2026-01-05 01:47:45 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:47:45.330922 | orchestrator | 2026-01-05 01:47:45 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:47:45.330983 | orchestrator | 2026-01-05 01:47:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:48.379284 | orchestrator | 2026-01-05 01:47:48 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:47:48.380595 | orchestrator | 2026-01-05 01:47:48 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
01:47:48.380654 | orchestrator | 2026-01-05 01:47:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:51.432424 | orchestrator | 2026-01-05 01:47:51 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:47:51.433916 | orchestrator | 2026-01-05 01:47:51 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:47:51.433964 | orchestrator | 2026-01-05 01:47:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:54.479562 | orchestrator | 2026-01-05 01:47:54 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:47:54.481749 | orchestrator | 2026-01-05 01:47:54 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:47:54.481834 | orchestrator | 2026-01-05 01:47:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:47:57.533617 | orchestrator | 2026-01-05 01:47:57 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:47:57.536994 | orchestrator | 2026-01-05 01:47:57 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:47:57.537078 | orchestrator | 2026-01-05 01:47:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:48:00.584416 | orchestrator | 2026-01-05 01:48:00 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:48:00.585511 | orchestrator | 2026-01-05 01:48:00 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:48:00.585543 | orchestrator | 2026-01-05 01:48:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:48:03.636449 | orchestrator | 2026-01-05 01:48:03 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:48:03.637678 | orchestrator | 2026-01-05 01:48:03 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:48:03.637745 | orchestrator | 2026-01-05 01:48:03 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 01:48:06.686862 | orchestrator | 2026-01-05 01:48:06 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:48:06.689281 | orchestrator | 2026-01-05 01:48:06 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:48:06.689355 | orchestrator | 2026-01-05 01:48:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:48:09.742801 | orchestrator | 2026-01-05 01:48:09 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:48:09.744666 | orchestrator | 2026-01-05 01:48:09 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:48:09.744808 | orchestrator | 2026-01-05 01:48:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:48:12.797204 | orchestrator | 2026-01-05 01:48:12 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:48:12.798901 | orchestrator | 2026-01-05 01:48:12 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:48:12.798969 | orchestrator | 2026-01-05 01:48:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:48:15.849901 | orchestrator | 2026-01-05 01:48:15 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:48:15.851368 | orchestrator | 2026-01-05 01:48:15 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:48:15.851410 | orchestrator | 2026-01-05 01:48:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:48:18.895982 | orchestrator | 2026-01-05 01:48:18 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:48:18.898462 | orchestrator | 2026-01-05 01:48:18 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:48:18.898504 | orchestrator | 2026-01-05 01:48:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:48:21.942769 | orchestrator | 2026-01-05 
01:48:21 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:48:21.947548 | orchestrator | 2026-01-05 01:48:21 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:48:21.947954 | orchestrator | 2026-01-05 01:48:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:48:24.992517 | orchestrator | 2026-01-05 01:48:24 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:48:24.994504 | orchestrator | 2026-01-05 01:48:24 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:48:24.994553 | orchestrator | 2026-01-05 01:48:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:48:28.035862 | orchestrator | 2026-01-05 01:48:28 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:48:28.037970 | orchestrator | 2026-01-05 01:48:28 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:48:28.038062 | orchestrator | 2026-01-05 01:48:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:48:31.092672 | orchestrator | 2026-01-05 01:48:31 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:48:31.094629 | orchestrator | 2026-01-05 01:48:31 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:48:31.094717 | orchestrator | 2026-01-05 01:48:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:48:34.137256 | orchestrator | 2026-01-05 01:48:34 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:48:34.139384 | orchestrator | 2026-01-05 01:48:34 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:48:34.139445 | orchestrator | 2026-01-05 01:48:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:48:37.186461 | orchestrator | 2026-01-05 01:48:37 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 01:48:37.187894 | orchestrator | 2026-01-05 01:48:37 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:48:37.187917 | orchestrator | 2026-01-05 01:48:37 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated roughly every 3 seconds; tasks e3a9f185-bcb6-4913-bb1a-d444ee1687d0 and 00e2a2c6-6b94-416a-ac35-b73676807745 remained in state STARTED from 01:48:40 through 01:53:51 ...]
2026-01-05 01:53:54.454478 | orchestrator | 2026-01-05 01:53:54 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state
STARTED 2026-01-05 01:53:54.455681 | orchestrator | 2026-01-05 01:53:54 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:53:54.455791 | orchestrator | 2026-01-05 01:53:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:53:57.504032 | orchestrator | 2026-01-05 01:53:57 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:53:57.505479 | orchestrator | 2026-01-05 01:53:57 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:53:57.505505 | orchestrator | 2026-01-05 01:53:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:54:00.552709 | orchestrator | 2026-01-05 01:54:00 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:54:00.555799 | orchestrator | 2026-01-05 01:54:00 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:54:00.555902 | orchestrator | 2026-01-05 01:54:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:54:03.611362 | orchestrator | 2026-01-05 01:54:03 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:54:03.613062 | orchestrator | 2026-01-05 01:54:03 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:54:03.613132 | orchestrator | 2026-01-05 01:54:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:54:06.655489 | orchestrator | 2026-01-05 01:54:06 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:54:06.658081 | orchestrator | 2026-01-05 01:54:06 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:54:06.658178 | orchestrator | 2026-01-05 01:54:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:54:09.708929 | orchestrator | 2026-01-05 01:54:09 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:54:09.709470 | orchestrator | 2026-01-05 01:54:09 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:54:09.709495 | orchestrator | 2026-01-05 01:54:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:54:12.757988 | orchestrator | 2026-01-05 01:54:12 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:54:12.758661 | orchestrator | 2026-01-05 01:54:12 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:54:12.758868 | orchestrator | 2026-01-05 01:54:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:54:15.810712 | orchestrator | 2026-01-05 01:54:15 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:54:15.812469 | orchestrator | 2026-01-05 01:54:15 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:54:15.812540 | orchestrator | 2026-01-05 01:54:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:54:18.860124 | orchestrator | 2026-01-05 01:54:18 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:54:18.862177 | orchestrator | 2026-01-05 01:54:18 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:54:18.862289 | orchestrator | 2026-01-05 01:54:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:54:21.902366 | orchestrator | 2026-01-05 01:54:21 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:54:21.904578 | orchestrator | 2026-01-05 01:54:21 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:54:21.904695 | orchestrator | 2026-01-05 01:54:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:54:24.957293 | orchestrator | 2026-01-05 01:54:24 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:54:24.959154 | orchestrator | 2026-01-05 01:54:24 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
01:54:24.959335 | orchestrator | 2026-01-05 01:54:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:54:28.011535 | orchestrator | 2026-01-05 01:54:28 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:54:28.013741 | orchestrator | 2026-01-05 01:54:28 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:54:28.013814 | orchestrator | 2026-01-05 01:54:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:54:31.069800 | orchestrator | 2026-01-05 01:54:31 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:54:31.072400 | orchestrator | 2026-01-05 01:54:31 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:54:31.072457 | orchestrator | 2026-01-05 01:54:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:54:34.115796 | orchestrator | 2026-01-05 01:54:34 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:54:34.117859 | orchestrator | 2026-01-05 01:54:34 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:54:34.117899 | orchestrator | 2026-01-05 01:54:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:54:37.168586 | orchestrator | 2026-01-05 01:54:37 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:54:37.169421 | orchestrator | 2026-01-05 01:54:37 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:54:37.169512 | orchestrator | 2026-01-05 01:54:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:54:40.212657 | orchestrator | 2026-01-05 01:54:40 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:54:40.214574 | orchestrator | 2026-01-05 01:54:40 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:54:40.214636 | orchestrator | 2026-01-05 01:54:40 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 01:54:43.266096 | orchestrator | 2026-01-05 01:54:43 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:54:43.268143 | orchestrator | 2026-01-05 01:54:43 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:54:43.268201 | orchestrator | 2026-01-05 01:54:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:54:46.312520 | orchestrator | 2026-01-05 01:54:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:54:46.314799 | orchestrator | 2026-01-05 01:54:46 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:54:46.314886 | orchestrator | 2026-01-05 01:54:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:54:49.368519 | orchestrator | 2026-01-05 01:54:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:54:49.371354 | orchestrator | 2026-01-05 01:54:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:54:49.371437 | orchestrator | 2026-01-05 01:54:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:54:52.425374 | orchestrator | 2026-01-05 01:54:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:54:52.427267 | orchestrator | 2026-01-05 01:54:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:54:52.427473 | orchestrator | 2026-01-05 01:54:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:54:55.474926 | orchestrator | 2026-01-05 01:54:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:54:55.477599 | orchestrator | 2026-01-05 01:54:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:54:55.477770 | orchestrator | 2026-01-05 01:54:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:54:58.530529 | orchestrator | 2026-01-05 
01:54:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:54:58.533007 | orchestrator | 2026-01-05 01:54:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:54:58.533104 | orchestrator | 2026-01-05 01:54:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:01.578106 | orchestrator | 2026-01-05 01:55:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:55:01.579813 | orchestrator | 2026-01-05 01:55:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:55:01.579919 | orchestrator | 2026-01-05 01:55:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:04.633739 | orchestrator | 2026-01-05 01:55:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:55:04.635594 | orchestrator | 2026-01-05 01:55:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:55:04.635632 | orchestrator | 2026-01-05 01:55:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:07.683241 | orchestrator | 2026-01-05 01:55:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:55:07.684653 | orchestrator | 2026-01-05 01:55:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:55:07.685240 | orchestrator | 2026-01-05 01:55:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:10.732682 | orchestrator | 2026-01-05 01:55:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:55:10.735323 | orchestrator | 2026-01-05 01:55:10 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:55:10.735398 | orchestrator | 2026-01-05 01:55:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:13.779091 | orchestrator | 2026-01-05 01:55:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 01:55:13.779240 | orchestrator | 2026-01-05 01:55:13 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:55:13.779259 | orchestrator | 2026-01-05 01:55:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:16.824825 | orchestrator | 2026-01-05 01:55:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:55:16.827252 | orchestrator | 2026-01-05 01:55:16 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:55:16.827316 | orchestrator | 2026-01-05 01:55:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:19.873317 | orchestrator | 2026-01-05 01:55:19 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:55:19.877177 | orchestrator | 2026-01-05 01:55:19 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:55:19.877244 | orchestrator | 2026-01-05 01:55:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:22.921036 | orchestrator | 2026-01-05 01:55:22 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:55:22.923247 | orchestrator | 2026-01-05 01:55:22 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:55:22.923305 | orchestrator | 2026-01-05 01:55:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:25.966596 | orchestrator | 2026-01-05 01:55:25 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:55:25.968430 | orchestrator | 2026-01-05 01:55:25 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:55:25.968527 | orchestrator | 2026-01-05 01:55:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:29.019920 | orchestrator | 2026-01-05 01:55:29 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:55:29.020937 | orchestrator | 2026-01-05 01:55:29 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:55:29.021017 | orchestrator | 2026-01-05 01:55:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:32.069466 | orchestrator | 2026-01-05 01:55:32 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:55:32.070726 | orchestrator | 2026-01-05 01:55:32 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:55:32.070806 | orchestrator | 2026-01-05 01:55:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:35.129678 | orchestrator | 2026-01-05 01:55:35 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:55:35.131498 | orchestrator | 2026-01-05 01:55:35 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:55:35.131630 | orchestrator | 2026-01-05 01:55:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:38.183404 | orchestrator | 2026-01-05 01:55:38 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:55:38.185204 | orchestrator | 2026-01-05 01:55:38 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:55:38.185250 | orchestrator | 2026-01-05 01:55:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:41.236516 | orchestrator | 2026-01-05 01:55:41 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:55:41.238256 | orchestrator | 2026-01-05 01:55:41 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:55:41.238301 | orchestrator | 2026-01-05 01:55:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:44.287205 | orchestrator | 2026-01-05 01:55:44 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:55:44.288485 | orchestrator | 2026-01-05 01:55:44 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
01:55:44.288542 | orchestrator | 2026-01-05 01:55:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:47.337259 | orchestrator | 2026-01-05 01:55:47 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:55:47.340231 | orchestrator | 2026-01-05 01:55:47 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:55:47.340311 | orchestrator | 2026-01-05 01:55:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:50.383863 | orchestrator | 2026-01-05 01:55:50 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:55:50.386633 | orchestrator | 2026-01-05 01:55:50 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:55:50.386698 | orchestrator | 2026-01-05 01:55:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:53.433945 | orchestrator | 2026-01-05 01:55:53 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:55:53.437254 | orchestrator | 2026-01-05 01:55:53 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:55:53.437327 | orchestrator | 2026-01-05 01:55:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:56.484421 | orchestrator | 2026-01-05 01:55:56 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:55:56.485862 | orchestrator | 2026-01-05 01:55:56 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:55:56.485988 | orchestrator | 2026-01-05 01:55:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:55:59.537137 | orchestrator | 2026-01-05 01:55:59 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:55:59.538938 | orchestrator | 2026-01-05 01:55:59 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:55:59.539000 | orchestrator | 2026-01-05 01:55:59 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 01:56:02.588319 | orchestrator | 2026-01-05 01:56:02 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:56:02.590208 | orchestrator | 2026-01-05 01:56:02 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:56:02.590263 | orchestrator | 2026-01-05 01:56:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:05.633151 | orchestrator | 2026-01-05 01:56:05 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:56:05.635525 | orchestrator | 2026-01-05 01:56:05 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:56:05.635581 | orchestrator | 2026-01-05 01:56:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:08.682634 | orchestrator | 2026-01-05 01:56:08 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:56:08.684971 | orchestrator | 2026-01-05 01:56:08 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:56:08.685044 | orchestrator | 2026-01-05 01:56:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:11.728977 | orchestrator | 2026-01-05 01:56:11 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:56:11.731402 | orchestrator | 2026-01-05 01:56:11 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:56:11.731479 | orchestrator | 2026-01-05 01:56:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:14.787440 | orchestrator | 2026-01-05 01:56:14 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:56:14.789720 | orchestrator | 2026-01-05 01:56:14 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:56:14.789843 | orchestrator | 2026-01-05 01:56:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:17.840961 | orchestrator | 2026-01-05 
01:56:17 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:56:17.842504 | orchestrator | 2026-01-05 01:56:17 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:56:17.842976 | orchestrator | 2026-01-05 01:56:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:20.894185 | orchestrator | 2026-01-05 01:56:20 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:56:20.896248 | orchestrator | 2026-01-05 01:56:20 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:56:20.896347 | orchestrator | 2026-01-05 01:56:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:23.951349 | orchestrator | 2026-01-05 01:56:23 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:56:23.953404 | orchestrator | 2026-01-05 01:56:23 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:56:23.953472 | orchestrator | 2026-01-05 01:56:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:27.000057 | orchestrator | 2026-01-05 01:56:26 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:56:27.002268 | orchestrator | 2026-01-05 01:56:27 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:56:27.002371 | orchestrator | 2026-01-05 01:56:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:30.047082 | orchestrator | 2026-01-05 01:56:30 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:56:30.049802 | orchestrator | 2026-01-05 01:56:30 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:56:30.049904 | orchestrator | 2026-01-05 01:56:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:33.092628 | orchestrator | 2026-01-05 01:56:33 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 01:56:33.094957 | orchestrator | 2026-01-05 01:56:33 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:56:33.095015 | orchestrator | 2026-01-05 01:56:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:36.149032 | orchestrator | 2026-01-05 01:56:36 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:56:36.152205 | orchestrator | 2026-01-05 01:56:36 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:56:36.152271 | orchestrator | 2026-01-05 01:56:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:39.201814 | orchestrator | 2026-01-05 01:56:39 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:56:39.203160 | orchestrator | 2026-01-05 01:56:39 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:56:39.203209 | orchestrator | 2026-01-05 01:56:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:42.251587 | orchestrator | 2026-01-05 01:56:42 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:56:42.253750 | orchestrator | 2026-01-05 01:56:42 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:56:42.253802 | orchestrator | 2026-01-05 01:56:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:45.308083 | orchestrator | 2026-01-05 01:56:45 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:56:45.309916 | orchestrator | 2026-01-05 01:56:45 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:56:45.310083 | orchestrator | 2026-01-05 01:56:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:48.366321 | orchestrator | 2026-01-05 01:56:48 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:56:48.368299 | orchestrator | 2026-01-05 01:56:48 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:56:48.368382 | orchestrator | 2026-01-05 01:56:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:51.416236 | orchestrator | 2026-01-05 01:56:51 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:56:51.419392 | orchestrator | 2026-01-05 01:56:51 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:56:51.419479 | orchestrator | 2026-01-05 01:56:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:54.470048 | orchestrator | 2026-01-05 01:56:54 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:56:54.471862 | orchestrator | 2026-01-05 01:56:54 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:56:54.471913 | orchestrator | 2026-01-05 01:56:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:56:57.522809 | orchestrator | 2026-01-05 01:56:57 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:56:57.527617 | orchestrator | 2026-01-05 01:56:57 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:56:57.527889 | orchestrator | 2026-01-05 01:56:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:57:00.573142 | orchestrator | 2026-01-05 01:57:00 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:57:00.573764 | orchestrator | 2026-01-05 01:57:00 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:57:00.574158 | orchestrator | 2026-01-05 01:57:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:57:03.621315 | orchestrator | 2026-01-05 01:57:03 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:57:03.623334 | orchestrator | 2026-01-05 01:57:03 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
01:57:03.623400 | orchestrator | 2026-01-05 01:57:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:57:06.671761 | orchestrator | 2026-01-05 01:57:06 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:57:06.673245 | orchestrator | 2026-01-05 01:57:06 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:57:06.673416 | orchestrator | 2026-01-05 01:57:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:57:09.722768 | orchestrator | 2026-01-05 01:57:09 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:57:09.724145 | orchestrator | 2026-01-05 01:57:09 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:57:09.724196 | orchestrator | 2026-01-05 01:57:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:57:12.767725 | orchestrator | 2026-01-05 01:57:12 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:57:12.769398 | orchestrator | 2026-01-05 01:57:12 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:57:12.769451 | orchestrator | 2026-01-05 01:57:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:57:15.817316 | orchestrator | 2026-01-05 01:57:15 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:57:15.818761 | orchestrator | 2026-01-05 01:57:15 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:57:15.818796 | orchestrator | 2026-01-05 01:57:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 01:57:18.865869 | orchestrator | 2026-01-05 01:57:18 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 01:57:18.866953 | orchestrator | 2026-01-05 01:57:18 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 01:57:18.867184 | orchestrator | 2026-01-05 01:57:18 | INFO  | Wait 1 second(s) 
until the next check
2026-01-05 01:57:21.915428 | orchestrator | 2026-01-05 01:57:21 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 01:57:21.917104 | orchestrator | 2026-01-05 01:57:21 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 01:57:21.917223 | orchestrator | 2026-01-05 01:57:21 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 01:57:24 through 02:02:33; both tasks remained in state STARTED throughout ...]
2026-01-05 02:02:36.212991 | orchestrator | 2026-01-05 02:02:36 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 02:02:36.222843 | orchestrator | 2026-01-05 02:02:36 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 02:02:36.222960 | orchestrator | 2026-01-05 02:02:36 | INFO  | Wait 1 second(s)
until the next check 2026-01-05 02:02:39.270989 | orchestrator | 2026-01-05 02:02:39 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:02:39.273311 | orchestrator | 2026-01-05 02:02:39 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:02:39.273510 | orchestrator | 2026-01-05 02:02:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:02:42.314376 | orchestrator | 2026-01-05 02:02:42 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:02:42.316986 | orchestrator | 2026-01-05 02:02:42 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:02:42.317171 | orchestrator | 2026-01-05 02:02:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:02:45.372960 | orchestrator | 2026-01-05 02:02:45 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:02:45.374955 | orchestrator | 2026-01-05 02:02:45 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:02:45.375013 | orchestrator | 2026-01-05 02:02:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:02:48.426245 | orchestrator | 2026-01-05 02:02:48 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:02:48.428926 | orchestrator | 2026-01-05 02:02:48 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:02:48.428996 | orchestrator | 2026-01-05 02:02:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:02:51.474896 | orchestrator | 2026-01-05 02:02:51 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:02:51.476849 | orchestrator | 2026-01-05 02:02:51 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:02:51.478076 | orchestrator | 2026-01-05 02:02:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:02:54.514186 | orchestrator | 2026-01-05 
02:02:54 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:02:54.514314 | orchestrator | 2026-01-05 02:02:54 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:02:54.514330 | orchestrator | 2026-01-05 02:02:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:02:57.560211 | orchestrator | 2026-01-05 02:02:57 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:02:57.562309 | orchestrator | 2026-01-05 02:02:57 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:02:57.562368 | orchestrator | 2026-01-05 02:02:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:03:00.617195 | orchestrator | 2026-01-05 02:03:00 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:03:00.617985 | orchestrator | 2026-01-05 02:03:00 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:03:00.618066 | orchestrator | 2026-01-05 02:03:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:03:03.669188 | orchestrator | 2026-01-05 02:03:03 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:03:03.671435 | orchestrator | 2026-01-05 02:03:03 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:03:03.671616 | orchestrator | 2026-01-05 02:03:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:03:06.724264 | orchestrator | 2026-01-05 02:03:06 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:03:06.726131 | orchestrator | 2026-01-05 02:03:06 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:03:06.726183 | orchestrator | 2026-01-05 02:03:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:03:09.768059 | orchestrator | 2026-01-05 02:03:09 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 02:03:09.770091 | orchestrator | 2026-01-05 02:03:09 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:03:09.770130 | orchestrator | 2026-01-05 02:03:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:03:12.809976 | orchestrator | 2026-01-05 02:03:12 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:03:12.811504 | orchestrator | 2026-01-05 02:03:12 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:03:12.811581 | orchestrator | 2026-01-05 02:03:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:03:15.854498 | orchestrator | 2026-01-05 02:03:15 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:03:15.855412 | orchestrator | 2026-01-05 02:03:15 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:03:15.855466 | orchestrator | 2026-01-05 02:03:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:03:18.900167 | orchestrator | 2026-01-05 02:03:18 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:03:18.902399 | orchestrator | 2026-01-05 02:03:18 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:03:18.902517 | orchestrator | 2026-01-05 02:03:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:03:21.948451 | orchestrator | 2026-01-05 02:03:21 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:03:21.949181 | orchestrator | 2026-01-05 02:03:21 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:03:21.949213 | orchestrator | 2026-01-05 02:03:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:03:24.996090 | orchestrator | 2026-01-05 02:03:24 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:03:24.998216 | orchestrator | 2026-01-05 02:03:24 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:03:24.998269 | orchestrator | 2026-01-05 02:03:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:03:28.053427 | orchestrator | 2026-01-05 02:03:28 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:03:28.054212 | orchestrator | 2026-01-05 02:03:28 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:03:28.054661 | orchestrator | 2026-01-05 02:03:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:03:31.096107 | orchestrator | 2026-01-05 02:03:31 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:03:31.098729 | orchestrator | 2026-01-05 02:03:31 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:03:31.098819 | orchestrator | 2026-01-05 02:03:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:03:34.135350 | orchestrator | 2026-01-05 02:03:34 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:03:34.136482 | orchestrator | 2026-01-05 02:03:34 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:03:34.136625 | orchestrator | 2026-01-05 02:03:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:03:37.188554 | orchestrator | 2026-01-05 02:03:37 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:03:37.190269 | orchestrator | 2026-01-05 02:03:37 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:03:37.190332 | orchestrator | 2026-01-05 02:03:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:03:40.237725 | orchestrator | 2026-01-05 02:03:40 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:03:40.239912 | orchestrator | 2026-01-05 02:03:40 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
02:03:40.239979 | orchestrator | 2026-01-05 02:03:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:03:43.281285 | orchestrator | 2026-01-05 02:03:43 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:03:43.283326 | orchestrator | 2026-01-05 02:03:43 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:03:43.283395 | orchestrator | 2026-01-05 02:03:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:03:46.335256 | orchestrator | 2026-01-05 02:03:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:03:46.337569 | orchestrator | 2026-01-05 02:03:46 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:03:46.337628 | orchestrator | 2026-01-05 02:03:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:03:49.383257 | orchestrator | 2026-01-05 02:03:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:03:49.383459 | orchestrator | 2026-01-05 02:03:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:03:49.383480 | orchestrator | 2026-01-05 02:03:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:03:52.428993 | orchestrator | 2026-01-05 02:03:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:03:52.431431 | orchestrator | 2026-01-05 02:03:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:03:52.431619 | orchestrator | 2026-01-05 02:03:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:03:55.478382 | orchestrator | 2026-01-05 02:03:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:03:55.482274 | orchestrator | 2026-01-05 02:03:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:03:55.482389 | orchestrator | 2026-01-05 02:03:55 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:03:58.526488 | orchestrator | 2026-01-05 02:03:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:03:58.528615 | orchestrator | 2026-01-05 02:03:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:03:58.528703 | orchestrator | 2026-01-05 02:03:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:01.570481 | orchestrator | 2026-01-05 02:04:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:04:01.571590 | orchestrator | 2026-01-05 02:04:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:04:01.571654 | orchestrator | 2026-01-05 02:04:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:04.619787 | orchestrator | 2026-01-05 02:04:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:04:04.623409 | orchestrator | 2026-01-05 02:04:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:04:04.623548 | orchestrator | 2026-01-05 02:04:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:07.680314 | orchestrator | 2026-01-05 02:04:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:04:07.682318 | orchestrator | 2026-01-05 02:04:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:04:07.682459 | orchestrator | 2026-01-05 02:04:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:10.734593 | orchestrator | 2026-01-05 02:04:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:04:10.736350 | orchestrator | 2026-01-05 02:04:10 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:04:10.736449 | orchestrator | 2026-01-05 02:04:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:13.788702 | orchestrator | 2026-01-05 
02:04:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:04:13.790460 | orchestrator | 2026-01-05 02:04:13 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:04:13.790527 | orchestrator | 2026-01-05 02:04:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:16.840670 | orchestrator | 2026-01-05 02:04:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:04:17.019636 | orchestrator | 2026-01-05 02:04:16 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:04:17.019777 | orchestrator | 2026-01-05 02:04:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:19.889928 | orchestrator | 2026-01-05 02:04:19 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:04:19.891305 | orchestrator | 2026-01-05 02:04:19 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:04:19.891361 | orchestrator | 2026-01-05 02:04:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:22.937618 | orchestrator | 2026-01-05 02:04:22 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:04:22.939615 | orchestrator | 2026-01-05 02:04:22 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:04:22.939827 | orchestrator | 2026-01-05 02:04:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:25.984481 | orchestrator | 2026-01-05 02:04:25 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:04:25.986600 | orchestrator | 2026-01-05 02:04:25 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:04:25.986694 | orchestrator | 2026-01-05 02:04:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:29.031322 | orchestrator | 2026-01-05 02:04:29 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 02:04:29.032705 | orchestrator | 2026-01-05 02:04:29 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:04:29.032826 | orchestrator | 2026-01-05 02:04:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:32.084267 | orchestrator | 2026-01-05 02:04:32 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:04:32.087232 | orchestrator | 2026-01-05 02:04:32 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:04:32.087282 | orchestrator | 2026-01-05 02:04:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:35.134871 | orchestrator | 2026-01-05 02:04:35 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:04:35.135905 | orchestrator | 2026-01-05 02:04:35 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:04:35.135944 | orchestrator | 2026-01-05 02:04:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:38.185423 | orchestrator | 2026-01-05 02:04:38 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:04:38.188230 | orchestrator | 2026-01-05 02:04:38 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:04:38.188691 | orchestrator | 2026-01-05 02:04:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:41.232604 | orchestrator | 2026-01-05 02:04:41 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:04:41.233901 | orchestrator | 2026-01-05 02:04:41 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:04:41.233946 | orchestrator | 2026-01-05 02:04:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:44.286248 | orchestrator | 2026-01-05 02:04:44 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:04:44.289352 | orchestrator | 2026-01-05 02:04:44 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:04:44.289425 | orchestrator | 2026-01-05 02:04:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:47.344359 | orchestrator | 2026-01-05 02:04:47 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:04:47.345641 | orchestrator | 2026-01-05 02:04:47 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:04:47.345749 | orchestrator | 2026-01-05 02:04:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:50.399604 | orchestrator | 2026-01-05 02:04:50 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:04:50.401901 | orchestrator | 2026-01-05 02:04:50 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:04:50.402136 | orchestrator | 2026-01-05 02:04:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:53.455426 | orchestrator | 2026-01-05 02:04:53 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:04:53.458618 | orchestrator | 2026-01-05 02:04:53 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:04:53.458710 | orchestrator | 2026-01-05 02:04:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:56.510134 | orchestrator | 2026-01-05 02:04:56 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:04:56.511399 | orchestrator | 2026-01-05 02:04:56 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:04:56.511443 | orchestrator | 2026-01-05 02:04:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:04:59.559973 | orchestrator | 2026-01-05 02:04:59 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:04:59.561626 | orchestrator | 2026-01-05 02:04:59 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
02:04:59.561648 | orchestrator | 2026-01-05 02:04:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:02.608925 | orchestrator | 2026-01-05 02:05:02 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:05:02.610765 | orchestrator | 2026-01-05 02:05:02 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:05:02.610861 | orchestrator | 2026-01-05 02:05:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:05.662872 | orchestrator | 2026-01-05 02:05:05 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:05:05.665454 | orchestrator | 2026-01-05 02:05:05 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:05:05.665569 | orchestrator | 2026-01-05 02:05:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:08.718919 | orchestrator | 2026-01-05 02:05:08 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:05:08.721988 | orchestrator | 2026-01-05 02:05:08 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:05:08.722124 | orchestrator | 2026-01-05 02:05:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:11.765753 | orchestrator | 2026-01-05 02:05:11 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:05:11.766288 | orchestrator | 2026-01-05 02:05:11 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:05:11.766384 | orchestrator | 2026-01-05 02:05:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:14.813873 | orchestrator | 2026-01-05 02:05:14 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:05:14.815509 | orchestrator | 2026-01-05 02:05:14 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:05:14.815571 | orchestrator | 2026-01-05 02:05:14 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:05:17.864219 | orchestrator | 2026-01-05 02:05:17 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:05:17.865729 | orchestrator | 2026-01-05 02:05:17 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:05:17.865770 | orchestrator | 2026-01-05 02:05:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:20.912460 | orchestrator | 2026-01-05 02:05:20 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:05:20.914774 | orchestrator | 2026-01-05 02:05:20 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:05:20.914890 | orchestrator | 2026-01-05 02:05:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:23.965324 | orchestrator | 2026-01-05 02:05:23 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:05:23.966984 | orchestrator | 2026-01-05 02:05:23 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:05:23.967050 | orchestrator | 2026-01-05 02:05:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:27.022186 | orchestrator | 2026-01-05 02:05:27 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:05:27.024076 | orchestrator | 2026-01-05 02:05:27 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:05:27.024148 | orchestrator | 2026-01-05 02:05:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:30.080604 | orchestrator | 2026-01-05 02:05:30 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:05:30.082718 | orchestrator | 2026-01-05 02:05:30 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:05:30.082860 | orchestrator | 2026-01-05 02:05:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:33.131204 | orchestrator | 2026-01-05 
02:05:33 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:05:33.134208 | orchestrator | 2026-01-05 02:05:33 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:05:33.134290 | orchestrator | 2026-01-05 02:05:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:36.183788 | orchestrator | 2026-01-05 02:05:36 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:05:36.184751 | orchestrator | 2026-01-05 02:05:36 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:05:36.184842 | orchestrator | 2026-01-05 02:05:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:39.235028 | orchestrator | 2026-01-05 02:05:39 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:05:39.236432 | orchestrator | 2026-01-05 02:05:39 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:05:39.236495 | orchestrator | 2026-01-05 02:05:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:42.283047 | orchestrator | 2026-01-05 02:05:42 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:05:42.285761 | orchestrator | 2026-01-05 02:05:42 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:05:42.285935 | orchestrator | 2026-01-05 02:05:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:45.339326 | orchestrator | 2026-01-05 02:05:45 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:05:45.341650 | orchestrator | 2026-01-05 02:05:45 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:05:45.341763 | orchestrator | 2026-01-05 02:05:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:48.387746 | orchestrator | 2026-01-05 02:05:48 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 02:05:48.388651 | orchestrator | 2026-01-05 02:05:48 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:05:48.388798 | orchestrator | 2026-01-05 02:05:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:51.433940 | orchestrator | 2026-01-05 02:05:51 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:05:51.436088 | orchestrator | 2026-01-05 02:05:51 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:05:51.436242 | orchestrator | 2026-01-05 02:05:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:54.486825 | orchestrator | 2026-01-05 02:05:54 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:05:54.489301 | orchestrator | 2026-01-05 02:05:54 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:05:54.489387 | orchestrator | 2026-01-05 02:05:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:05:57.544348 | orchestrator | 2026-01-05 02:05:57 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:05:57.547212 | orchestrator | 2026-01-05 02:05:57 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:05:57.547292 | orchestrator | 2026-01-05 02:05:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:06:00.596360 | orchestrator | 2026-01-05 02:06:00 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:06:00.598193 | orchestrator | 2026-01-05 02:06:00 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:06:00.598240 | orchestrator | 2026-01-05 02:06:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:06:03.641351 | orchestrator | 2026-01-05 02:06:03 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:06:03.642333 | orchestrator | 2026-01-05 02:06:03 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:06:03.642479 | orchestrator | 2026-01-05 02:06:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:06:06.691876 | orchestrator | 2026-01-05 02:06:06 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:06:06.693340 | orchestrator | 2026-01-05 02:06:06 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:06:06.693485 | orchestrator | 2026-01-05 02:06:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:06:09.741240 | orchestrator | 2026-01-05 02:06:09 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:06:09.742477 | orchestrator | 2026-01-05 02:06:09 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:06:09.742710 | orchestrator | 2026-01-05 02:06:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:06:12.793448 | orchestrator | 2026-01-05 02:06:12 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:06:12.794853 | orchestrator | 2026-01-05 02:06:12 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:06:12.794920 | orchestrator | 2026-01-05 02:06:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:06:15.844890 | orchestrator | 2026-01-05 02:06:15 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:06:15.847377 | orchestrator | 2026-01-05 02:06:15 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:06:15.847414 | orchestrator | 2026-01-05 02:06:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:06:18.896277 | orchestrator | 2026-01-05 02:06:18 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:06:18.898859 | orchestrator | 2026-01-05 02:06:18 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
02:06:18.898955 | orchestrator | 2026-01-05 02:06:18 | INFO  | Wait 1 second(s) until the next check
2026-01-05 02:06:21.942942 | orchestrator | 2026-01-05 02:06:21 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 02:06:21.945123 | orchestrator | 2026-01-05 02:06:21 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 02:06:21.945307 | orchestrator | 2026-01-05 02:06:21 | INFO  | Wait 1 second(s) until the next check
[identical status checks repeated every ~3 seconds from 02:06:24 through 02:11:48; both tasks remained in state STARTED throughout]
2026-01-05 02:11:51.353741 | orchestrator | 2026-01-05 02:11:51 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 02:11:51.356880 | orchestrator | 2026-01-05 02:11:51 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 02:11:51.357375 | orchestrator | 2026-01-05 02:11:51 | INFO  | Wait 1 second(s)
until the next check 2026-01-05 02:11:54.408001 | orchestrator | 2026-01-05 02:11:54 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:11:54.409013 | orchestrator | 2026-01-05 02:11:54 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:11:54.409071 | orchestrator | 2026-01-05 02:11:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:11:57.456708 | orchestrator | 2026-01-05 02:11:57 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:11:57.459456 | orchestrator | 2026-01-05 02:11:57 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:11:57.459592 | orchestrator | 2026-01-05 02:11:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:12:00.505257 | orchestrator | 2026-01-05 02:12:00 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:12:00.506837 | orchestrator | 2026-01-05 02:12:00 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:12:00.506921 | orchestrator | 2026-01-05 02:12:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:12:03.553300 | orchestrator | 2026-01-05 02:12:03 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:12:03.554689 | orchestrator | 2026-01-05 02:12:03 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:12:03.554776 | orchestrator | 2026-01-05 02:12:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:12:06.601794 | orchestrator | 2026-01-05 02:12:06 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:12:06.603887 | orchestrator | 2026-01-05 02:12:06 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:12:06.603928 | orchestrator | 2026-01-05 02:12:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:12:09.656655 | orchestrator | 2026-01-05 
02:12:09 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:12:09.658812 | orchestrator | 2026-01-05 02:12:09 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:12:09.658889 | orchestrator | 2026-01-05 02:12:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:12:12.708532 | orchestrator | 2026-01-05 02:12:12 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:12:12.712043 | orchestrator | 2026-01-05 02:12:12 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:12:12.712109 | orchestrator | 2026-01-05 02:12:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:12:15.763260 | orchestrator | 2026-01-05 02:12:15 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:12:15.766061 | orchestrator | 2026-01-05 02:12:15 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:12:15.766298 | orchestrator | 2026-01-05 02:12:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:12:18.814770 | orchestrator | 2026-01-05 02:12:18 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:12:18.816483 | orchestrator | 2026-01-05 02:12:18 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:12:18.816609 | orchestrator | 2026-01-05 02:12:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:12:21.865788 | orchestrator | 2026-01-05 02:12:21 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:12:21.867479 | orchestrator | 2026-01-05 02:12:21 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:12:21.867533 | orchestrator | 2026-01-05 02:12:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:12:24.910691 | orchestrator | 2026-01-05 02:12:24 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 02:12:24.912264 | orchestrator | 2026-01-05 02:12:24 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:12:24.912628 | orchestrator | 2026-01-05 02:12:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:12:27.954884 | orchestrator | 2026-01-05 02:12:27 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:12:27.956056 | orchestrator | 2026-01-05 02:12:27 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:12:27.956095 | orchestrator | 2026-01-05 02:12:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:12:31.005934 | orchestrator | 2026-01-05 02:12:31 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:12:31.008314 | orchestrator | 2026-01-05 02:12:31 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:12:31.008362 | orchestrator | 2026-01-05 02:12:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:12:34.058593 | orchestrator | 2026-01-05 02:12:34 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:12:34.060927 | orchestrator | 2026-01-05 02:12:34 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:12:34.061030 | orchestrator | 2026-01-05 02:12:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:12:37.113456 | orchestrator | 2026-01-05 02:12:37 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:12:37.115866 | orchestrator | 2026-01-05 02:12:37 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:12:37.115948 | orchestrator | 2026-01-05 02:12:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:12:40.158127 | orchestrator | 2026-01-05 02:12:40 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:12:40.159813 | orchestrator | 2026-01-05 02:12:40 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:12:40.159871 | orchestrator | 2026-01-05 02:12:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:12:43.204776 | orchestrator | 2026-01-05 02:12:43 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:12:43.206938 | orchestrator | 2026-01-05 02:12:43 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:12:43.207013 | orchestrator | 2026-01-05 02:12:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:12:46.255850 | orchestrator | 2026-01-05 02:12:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:12:46.258631 | orchestrator | 2026-01-05 02:12:46 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:12:46.258725 | orchestrator | 2026-01-05 02:12:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:12:49.309049 | orchestrator | 2026-01-05 02:12:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:12:49.311068 | orchestrator | 2026-01-05 02:12:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:12:49.311143 | orchestrator | 2026-01-05 02:12:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:12:52.361796 | orchestrator | 2026-01-05 02:12:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:12:52.363332 | orchestrator | 2026-01-05 02:12:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:12:52.363391 | orchestrator | 2026-01-05 02:12:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:12:55.415082 | orchestrator | 2026-01-05 02:12:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:12:55.416763 | orchestrator | 2026-01-05 02:12:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
02:12:55.416831 | orchestrator | 2026-01-05 02:12:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:12:58.470731 | orchestrator | 2026-01-05 02:12:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:12:58.474388 | orchestrator | 2026-01-05 02:12:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:12:58.474467 | orchestrator | 2026-01-05 02:12:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:01.523157 | orchestrator | 2026-01-05 02:13:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:13:01.524680 | orchestrator | 2026-01-05 02:13:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:13:01.524756 | orchestrator | 2026-01-05 02:13:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:04.579308 | orchestrator | 2026-01-05 02:13:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:13:04.582308 | orchestrator | 2026-01-05 02:13:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:13:04.582368 | orchestrator | 2026-01-05 02:13:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:07.625626 | orchestrator | 2026-01-05 02:13:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:13:07.627054 | orchestrator | 2026-01-05 02:13:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:13:07.627141 | orchestrator | 2026-01-05 02:13:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:10.685508 | orchestrator | 2026-01-05 02:13:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:13:10.688611 | orchestrator | 2026-01-05 02:13:10 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:13:10.688685 | orchestrator | 2026-01-05 02:13:10 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:13:13.737147 | orchestrator | 2026-01-05 02:13:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:13:13.739510 | orchestrator | 2026-01-05 02:13:13 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:13:13.739559 | orchestrator | 2026-01-05 02:13:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:16.792783 | orchestrator | 2026-01-05 02:13:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:13:16.793846 | orchestrator | 2026-01-05 02:13:16 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:13:16.793882 | orchestrator | 2026-01-05 02:13:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:19.838592 | orchestrator | 2026-01-05 02:13:19 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:13:19.839148 | orchestrator | 2026-01-05 02:13:19 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:13:19.839219 | orchestrator | 2026-01-05 02:13:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:22.888503 | orchestrator | 2026-01-05 02:13:22 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:13:22.890466 | orchestrator | 2026-01-05 02:13:22 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:13:22.890527 | orchestrator | 2026-01-05 02:13:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:25.939671 | orchestrator | 2026-01-05 02:13:25 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:13:25.942248 | orchestrator | 2026-01-05 02:13:25 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:13:25.942854 | orchestrator | 2026-01-05 02:13:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:28.992476 | orchestrator | 2026-01-05 
02:13:28 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:13:28.994757 | orchestrator | 2026-01-05 02:13:28 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:13:28.994875 | orchestrator | 2026-01-05 02:13:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:32.043981 | orchestrator | 2026-01-05 02:13:32 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:13:32.045260 | orchestrator | 2026-01-05 02:13:32 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:13:32.045362 | orchestrator | 2026-01-05 02:13:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:35.099442 | orchestrator | 2026-01-05 02:13:35 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:13:35.102198 | orchestrator | 2026-01-05 02:13:35 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:13:35.102275 | orchestrator | 2026-01-05 02:13:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:38.150717 | orchestrator | 2026-01-05 02:13:38 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:13:38.152417 | orchestrator | 2026-01-05 02:13:38 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:13:38.152486 | orchestrator | 2026-01-05 02:13:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:41.204099 | orchestrator | 2026-01-05 02:13:41 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:13:41.206222 | orchestrator | 2026-01-05 02:13:41 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:13:41.206313 | orchestrator | 2026-01-05 02:13:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:44.250537 | orchestrator | 2026-01-05 02:13:44 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 02:13:44.251420 | orchestrator | 2026-01-05 02:13:44 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:13:44.251563 | orchestrator | 2026-01-05 02:13:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:47.305425 | orchestrator | 2026-01-05 02:13:47 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:13:47.307550 | orchestrator | 2026-01-05 02:13:47 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:13:47.307615 | orchestrator | 2026-01-05 02:13:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:50.360245 | orchestrator | 2026-01-05 02:13:50 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:13:50.363372 | orchestrator | 2026-01-05 02:13:50 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:13:50.363444 | orchestrator | 2026-01-05 02:13:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:53.413623 | orchestrator | 2026-01-05 02:13:53 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:13:53.414757 | orchestrator | 2026-01-05 02:13:53 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:13:53.414813 | orchestrator | 2026-01-05 02:13:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:56.466155 | orchestrator | 2026-01-05 02:13:56 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:13:56.469656 | orchestrator | 2026-01-05 02:13:56 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:13:56.469734 | orchestrator | 2026-01-05 02:13:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:13:59.516292 | orchestrator | 2026-01-05 02:13:59 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:13:59.518460 | orchestrator | 2026-01-05 02:13:59 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:13:59.518558 | orchestrator | 2026-01-05 02:13:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:02.562469 | orchestrator | 2026-01-05 02:14:02 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:14:02.564854 | orchestrator | 2026-01-05 02:14:02 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:14:02.564979 | orchestrator | 2026-01-05 02:14:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:05.612346 | orchestrator | 2026-01-05 02:14:05 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:14:05.615459 | orchestrator | 2026-01-05 02:14:05 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:14:05.615515 | orchestrator | 2026-01-05 02:14:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:08.667434 | orchestrator | 2026-01-05 02:14:08 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:14:08.670306 | orchestrator | 2026-01-05 02:14:08 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:14:08.670385 | orchestrator | 2026-01-05 02:14:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:11.718074 | orchestrator | 2026-01-05 02:14:11 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:14:11.719641 | orchestrator | 2026-01-05 02:14:11 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:14:11.719812 | orchestrator | 2026-01-05 02:14:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:14.770661 | orchestrator | 2026-01-05 02:14:14 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:14:14.772017 | orchestrator | 2026-01-05 02:14:14 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
02:14:14.772202 | orchestrator | 2026-01-05 02:14:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:17.822476 | orchestrator | 2026-01-05 02:14:17 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:14:17.824833 | orchestrator | 2026-01-05 02:14:17 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:14:17.824895 | orchestrator | 2026-01-05 02:14:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:20.865762 | orchestrator | 2026-01-05 02:14:20 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:14:20.867195 | orchestrator | 2026-01-05 02:14:20 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:14:20.867440 | orchestrator | 2026-01-05 02:14:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:23.914075 | orchestrator | 2026-01-05 02:14:23 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:14:23.915504 | orchestrator | 2026-01-05 02:14:23 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:14:23.915546 | orchestrator | 2026-01-05 02:14:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:26.963315 | orchestrator | 2026-01-05 02:14:26 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:14:26.964813 | orchestrator | 2026-01-05 02:14:26 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:14:26.964871 | orchestrator | 2026-01-05 02:14:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:30.016904 | orchestrator | 2026-01-05 02:14:30 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:14:30.018958 | orchestrator | 2026-01-05 02:14:30 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:14:30.019017 | orchestrator | 2026-01-05 02:14:30 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:14:33.063999 | orchestrator | 2026-01-05 02:14:33 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:14:33.066100 | orchestrator | 2026-01-05 02:14:33 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:14:33.066183 | orchestrator | 2026-01-05 02:14:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:36.116554 | orchestrator | 2026-01-05 02:14:36 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:14:36.118000 | orchestrator | 2026-01-05 02:14:36 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:14:36.118271 | orchestrator | 2026-01-05 02:14:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:39.166588 | orchestrator | 2026-01-05 02:14:39 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:14:39.167704 | orchestrator | 2026-01-05 02:14:39 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:14:39.167741 | orchestrator | 2026-01-05 02:14:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:42.228758 | orchestrator | 2026-01-05 02:14:42 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:14:42.229656 | orchestrator | 2026-01-05 02:14:42 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:14:42.229705 | orchestrator | 2026-01-05 02:14:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:45.282246 | orchestrator | 2026-01-05 02:14:45 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:14:45.286163 | orchestrator | 2026-01-05 02:14:45 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:14:45.286245 | orchestrator | 2026-01-05 02:14:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:48.329812 | orchestrator | 2026-01-05 
02:14:48 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:14:48.333386 | orchestrator | 2026-01-05 02:14:48 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:14:48.333457 | orchestrator | 2026-01-05 02:14:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:51.384556 | orchestrator | 2026-01-05 02:14:51 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:14:51.388507 | orchestrator | 2026-01-05 02:14:51 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:14:51.388569 | orchestrator | 2026-01-05 02:14:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:54.434977 | orchestrator | 2026-01-05 02:14:54 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:14:54.435926 | orchestrator | 2026-01-05 02:14:54 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:14:54.436122 | orchestrator | 2026-01-05 02:14:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:14:57.484911 | orchestrator | 2026-01-05 02:14:57 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:14:57.486277 | orchestrator | 2026-01-05 02:14:57 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:14:57.486383 | orchestrator | 2026-01-05 02:14:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:15:00.536542 | orchestrator | 2026-01-05 02:15:00 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:15:00.537768 | orchestrator | 2026-01-05 02:15:00 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:15:00.537846 | orchestrator | 2026-01-05 02:15:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:15:03.583959 | orchestrator | 2026-01-05 02:15:03 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 02:15:03.586482 | orchestrator | 2026-01-05 02:15:03 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:15:03.586560 | orchestrator | 2026-01-05 02:15:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:15:06.638716 | orchestrator | 2026-01-05 02:15:06 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:15:06.641306 | orchestrator | 2026-01-05 02:15:06 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:15:06.641430 | orchestrator | 2026-01-05 02:15:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:15:09.692805 | orchestrator | 2026-01-05 02:15:09 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:15:09.695542 | orchestrator | 2026-01-05 02:15:09 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:15:09.695626 | orchestrator | 2026-01-05 02:15:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:15:12.745589 | orchestrator | 2026-01-05 02:15:12 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:15:12.747813 | orchestrator | 2026-01-05 02:15:12 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:15:12.747869 | orchestrator | 2026-01-05 02:15:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:15:15.803252 | orchestrator | 2026-01-05 02:15:15 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:15:15.805191 | orchestrator | 2026-01-05 02:15:15 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:15:15.805247 | orchestrator | 2026-01-05 02:15:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:15:18.848522 | orchestrator | 2026-01-05 02:15:18 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:15:18.849799 | orchestrator | 2026-01-05 02:15:18 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:15:18.849837 | orchestrator | 2026-01-05 02:15:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:15:21.907689 | orchestrator | 2026-01-05 02:15:21 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:15:21.909546 | orchestrator | 2026-01-05 02:15:21 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:15:21.909596 | orchestrator | 2026-01-05 02:15:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:15:24.964639 | orchestrator | 2026-01-05 02:15:24 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:15:25.259689 | orchestrator | 2026-01-05 02:15:24 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:15:25.259746 | orchestrator | 2026-01-05 02:15:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:15:28.010362 | orchestrator | 2026-01-05 02:15:28 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:15:28.011700 | orchestrator | 2026-01-05 02:15:28 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:15:28.012508 | orchestrator | 2026-01-05 02:15:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:15:31.064811 | orchestrator | 2026-01-05 02:15:31 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:15:31.066564 | orchestrator | 2026-01-05 02:15:31 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:15:31.066614 | orchestrator | 2026-01-05 02:15:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:15:34.108440 | orchestrator | 2026-01-05 02:15:34 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:15:34.110244 | orchestrator | 2026-01-05 02:15:34 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
02:15:34.110315 | orchestrator | 2026-01-05 02:15:34 | INFO  | Wait 1 second(s) until the next check
2026-01-05 02:15:37.155966 | orchestrator | 2026-01-05 02:15:37 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 02:15:37.156815 | orchestrator | 2026-01-05 02:15:37 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 02:15:37.156890 | orchestrator | 2026-01-05 02:15:37 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 02:15:40 through 02:20:33; both tasks remained in state STARTED throughout ...]
2026-01-05 02:20:36.148906 | orchestrator | 2026-01-05 02:20:36 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 02:20:36.150385 | orchestrator | 2026-01-05 02:20:36 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:20:36.150448 | orchestrator | 2026-01-05 02:20:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:20:39.200079 | orchestrator | 2026-01-05 02:20:39 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:20:39.202124 | orchestrator | 2026-01-05 02:20:39 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:20:39.202206 | orchestrator | 2026-01-05 02:20:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:20:42.260983 | orchestrator | 2026-01-05 02:20:42 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:20:42.263183 | orchestrator | 2026-01-05 02:20:42 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:20:42.263258 | orchestrator | 2026-01-05 02:20:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:20:45.305790 | orchestrator | 2026-01-05 02:20:45 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:20:45.308338 | orchestrator | 2026-01-05 02:20:45 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:20:45.308417 | orchestrator | 2026-01-05 02:20:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:20:48.358144 | orchestrator | 2026-01-05 02:20:48 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:20:48.361222 | orchestrator | 2026-01-05 02:20:48 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:20:48.361364 | orchestrator | 2026-01-05 02:20:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:20:51.417769 | orchestrator | 2026-01-05 02:20:51 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:20:51.419837 | orchestrator | 2026-01-05 02:20:51 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
02:20:51.419899 | orchestrator | 2026-01-05 02:20:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:20:54.466142 | orchestrator | 2026-01-05 02:20:54 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:20:54.468541 | orchestrator | 2026-01-05 02:20:54 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:20:54.468659 | orchestrator | 2026-01-05 02:20:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:20:57.518977 | orchestrator | 2026-01-05 02:20:57 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:20:57.521000 | orchestrator | 2026-01-05 02:20:57 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:20:57.521050 | orchestrator | 2026-01-05 02:20:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:21:00.569181 | orchestrator | 2026-01-05 02:21:00 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:21:00.570066 | orchestrator | 2026-01-05 02:21:00 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:21:00.570107 | orchestrator | 2026-01-05 02:21:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:21:03.614596 | orchestrator | 2026-01-05 02:21:03 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:21:03.616248 | orchestrator | 2026-01-05 02:21:03 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:21:03.616299 | orchestrator | 2026-01-05 02:21:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:21:06.665890 | orchestrator | 2026-01-05 02:21:06 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:21:06.667744 | orchestrator | 2026-01-05 02:21:06 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:21:06.668259 | orchestrator | 2026-01-05 02:21:06 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:21:09.722200 | orchestrator | 2026-01-05 02:21:09 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:21:09.725229 | orchestrator | 2026-01-05 02:21:09 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:21:09.725902 | orchestrator | 2026-01-05 02:21:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:21:12.775503 | orchestrator | 2026-01-05 02:21:12 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:21:12.778327 | orchestrator | 2026-01-05 02:21:12 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:21:12.778421 | orchestrator | 2026-01-05 02:21:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:21:15.826180 | orchestrator | 2026-01-05 02:21:15 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:21:15.827783 | orchestrator | 2026-01-05 02:21:15 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:21:15.827852 | orchestrator | 2026-01-05 02:21:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:21:18.875327 | orchestrator | 2026-01-05 02:21:18 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:21:18.878605 | orchestrator | 2026-01-05 02:21:18 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:21:18.878694 | orchestrator | 2026-01-05 02:21:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:21:21.927842 | orchestrator | 2026-01-05 02:21:21 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:21:21.929864 | orchestrator | 2026-01-05 02:21:21 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:21:21.929911 | orchestrator | 2026-01-05 02:21:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:21:24.974868 | orchestrator | 2026-01-05 
02:21:24 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:21:24.975992 | orchestrator | 2026-01-05 02:21:24 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:21:24.976064 | orchestrator | 2026-01-05 02:21:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:21:28.029085 | orchestrator | 2026-01-05 02:21:28 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:21:28.032603 | orchestrator | 2026-01-05 02:21:28 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:21:28.032692 | orchestrator | 2026-01-05 02:21:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:21:31.084212 | orchestrator | 2026-01-05 02:21:31 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:21:31.085651 | orchestrator | 2026-01-05 02:21:31 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:21:31.085719 | orchestrator | 2026-01-05 02:21:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:21:34.129800 | orchestrator | 2026-01-05 02:21:34 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:21:34.133187 | orchestrator | 2026-01-05 02:21:34 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:21:34.133276 | orchestrator | 2026-01-05 02:21:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:21:37.186295 | orchestrator | 2026-01-05 02:21:37 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:21:37.189814 | orchestrator | 2026-01-05 02:21:37 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:21:37.190299 | orchestrator | 2026-01-05 02:21:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:21:40.244651 | orchestrator | 2026-01-05 02:21:40 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 02:21:40.245271 | orchestrator | 2026-01-05 02:21:40 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:21:40.245370 | orchestrator | 2026-01-05 02:21:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:21:43.296777 | orchestrator | 2026-01-05 02:21:43 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:21:43.301231 | orchestrator | 2026-01-05 02:21:43 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:21:43.301380 | orchestrator | 2026-01-05 02:21:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:21:46.357509 | orchestrator | 2026-01-05 02:21:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:21:46.360045 | orchestrator | 2026-01-05 02:21:46 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:21:46.360113 | orchestrator | 2026-01-05 02:21:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:21:49.416053 | orchestrator | 2026-01-05 02:21:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:21:49.419993 | orchestrator | 2026-01-05 02:21:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:21:49.420073 | orchestrator | 2026-01-05 02:21:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:21:52.465992 | orchestrator | 2026-01-05 02:21:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:21:52.467304 | orchestrator | 2026-01-05 02:21:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:21:52.467344 | orchestrator | 2026-01-05 02:21:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:21:55.523486 | orchestrator | 2026-01-05 02:21:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:21:55.525580 | orchestrator | 2026-01-05 02:21:55 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:21:55.525635 | orchestrator | 2026-01-05 02:21:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:21:58.572177 | orchestrator | 2026-01-05 02:21:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:21:58.573592 | orchestrator | 2026-01-05 02:21:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:21:58.573639 | orchestrator | 2026-01-05 02:21:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:01.621282 | orchestrator | 2026-01-05 02:22:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:22:01.623082 | orchestrator | 2026-01-05 02:22:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:22:01.623166 | orchestrator | 2026-01-05 02:22:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:04.673597 | orchestrator | 2026-01-05 02:22:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:22:04.676682 | orchestrator | 2026-01-05 02:22:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:22:04.676787 | orchestrator | 2026-01-05 02:22:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:07.725782 | orchestrator | 2026-01-05 02:22:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:22:07.728187 | orchestrator | 2026-01-05 02:22:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:22:07.728240 | orchestrator | 2026-01-05 02:22:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:10.782091 | orchestrator | 2026-01-05 02:22:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:22:10.783860 | orchestrator | 2026-01-05 02:22:10 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
02:22:10.783961 | orchestrator | 2026-01-05 02:22:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:13.828834 | orchestrator | 2026-01-05 02:22:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:22:13.830230 | orchestrator | 2026-01-05 02:22:13 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:22:13.830283 | orchestrator | 2026-01-05 02:22:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:16.880873 | orchestrator | 2026-01-05 02:22:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:22:16.883532 | orchestrator | 2026-01-05 02:22:16 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:22:16.883857 | orchestrator | 2026-01-05 02:22:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:19.931294 | orchestrator | 2026-01-05 02:22:19 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:22:19.932880 | orchestrator | 2026-01-05 02:22:19 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:22:19.933168 | orchestrator | 2026-01-05 02:22:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:22.981650 | orchestrator | 2026-01-05 02:22:22 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:22:22.981808 | orchestrator | 2026-01-05 02:22:22 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:22:22.981819 | orchestrator | 2026-01-05 02:22:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:26.041744 | orchestrator | 2026-01-05 02:22:26 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:22:26.044800 | orchestrator | 2026-01-05 02:22:26 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:22:26.045119 | orchestrator | 2026-01-05 02:22:26 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:22:29.085597 | orchestrator | 2026-01-05 02:22:29 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:22:29.086696 | orchestrator | 2026-01-05 02:22:29 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:22:29.086742 | orchestrator | 2026-01-05 02:22:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:32.133143 | orchestrator | 2026-01-05 02:22:32 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:22:32.133738 | orchestrator | 2026-01-05 02:22:32 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:22:32.134119 | orchestrator | 2026-01-05 02:22:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:35.181878 | orchestrator | 2026-01-05 02:22:35 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:22:35.182971 | orchestrator | 2026-01-05 02:22:35 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:22:35.183036 | orchestrator | 2026-01-05 02:22:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:38.238093 | orchestrator | 2026-01-05 02:22:38 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:22:38.239663 | orchestrator | 2026-01-05 02:22:38 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:22:38.239886 | orchestrator | 2026-01-05 02:22:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:41.287348 | orchestrator | 2026-01-05 02:22:41 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:22:41.290261 | orchestrator | 2026-01-05 02:22:41 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:22:41.290433 | orchestrator | 2026-01-05 02:22:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:44.344239 | orchestrator | 2026-01-05 
02:22:44 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:22:44.345158 | orchestrator | 2026-01-05 02:22:44 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:22:44.345214 | orchestrator | 2026-01-05 02:22:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:47.391242 | orchestrator | 2026-01-05 02:22:47 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:22:47.393409 | orchestrator | 2026-01-05 02:22:47 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:22:47.640484 | orchestrator | 2026-01-05 02:22:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:50.442427 | orchestrator | 2026-01-05 02:22:50 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:22:50.443570 | orchestrator | 2026-01-05 02:22:50 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:22:50.443626 | orchestrator | 2026-01-05 02:22:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:53.492347 | orchestrator | 2026-01-05 02:22:53 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:22:53.494987 | orchestrator | 2026-01-05 02:22:53 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:22:53.495046 | orchestrator | 2026-01-05 02:22:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:56.542665 | orchestrator | 2026-01-05 02:22:56 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:22:56.544825 | orchestrator | 2026-01-05 02:22:56 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:22:56.544867 | orchestrator | 2026-01-05 02:22:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:22:59.585503 | orchestrator | 2026-01-05 02:22:59 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 02:22:59.587131 | orchestrator | 2026-01-05 02:22:59 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:22:59.587336 | orchestrator | 2026-01-05 02:22:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:02.634965 | orchestrator | 2026-01-05 02:23:02 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:23:02.638311 | orchestrator | 2026-01-05 02:23:02 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:23:02.638397 | orchestrator | 2026-01-05 02:23:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:05.686344 | orchestrator | 2026-01-05 02:23:05 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:23:05.689276 | orchestrator | 2026-01-05 02:23:05 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:23:05.689357 | orchestrator | 2026-01-05 02:23:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:08.743458 | orchestrator | 2026-01-05 02:23:08 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:23:08.745848 | orchestrator | 2026-01-05 02:23:08 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:23:08.745942 | orchestrator | 2026-01-05 02:23:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:11.792671 | orchestrator | 2026-01-05 02:23:11 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:23:11.794783 | orchestrator | 2026-01-05 02:23:11 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:23:11.794882 | orchestrator | 2026-01-05 02:23:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:14.847436 | orchestrator | 2026-01-05 02:23:14 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:23:14.849048 | orchestrator | 2026-01-05 02:23:14 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:23:14.849080 | orchestrator | 2026-01-05 02:23:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:17.899471 | orchestrator | 2026-01-05 02:23:17 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:23:17.901245 | orchestrator | 2026-01-05 02:23:17 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:23:17.901309 | orchestrator | 2026-01-05 02:23:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:20.953181 | orchestrator | 2026-01-05 02:23:20 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:23:20.954870 | orchestrator | 2026-01-05 02:23:20 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:23:20.955084 | orchestrator | 2026-01-05 02:23:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:23.998199 | orchestrator | 2026-01-05 02:23:23 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:23:24.004687 | orchestrator | 2026-01-05 02:23:24 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:23:24.004786 | orchestrator | 2026-01-05 02:23:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:27.049226 | orchestrator | 2026-01-05 02:23:27 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:23:27.050510 | orchestrator | 2026-01-05 02:23:27 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:23:27.050578 | orchestrator | 2026-01-05 02:23:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:30.092146 | orchestrator | 2026-01-05 02:23:30 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:23:30.093285 | orchestrator | 2026-01-05 02:23:30 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
02:23:30.093337 | orchestrator | 2026-01-05 02:23:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:33.141134 | orchestrator | 2026-01-05 02:23:33 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:23:33.144432 | orchestrator | 2026-01-05 02:23:33 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:23:33.144517 | orchestrator | 2026-01-05 02:23:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:36.188869 | orchestrator | 2026-01-05 02:23:36 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:23:36.190294 | orchestrator | 2026-01-05 02:23:36 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:23:36.190360 | orchestrator | 2026-01-05 02:23:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:39.236455 | orchestrator | 2026-01-05 02:23:39 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:23:39.239282 | orchestrator | 2026-01-05 02:23:39 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:23:39.239340 | orchestrator | 2026-01-05 02:23:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:42.293869 | orchestrator | 2026-01-05 02:23:42 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:23:42.295524 | orchestrator | 2026-01-05 02:23:42 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:23:42.295658 | orchestrator | 2026-01-05 02:23:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:45.346814 | orchestrator | 2026-01-05 02:23:45 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:23:45.348276 | orchestrator | 2026-01-05 02:23:45 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:23:45.348337 | orchestrator | 2026-01-05 02:23:45 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:23:48.400940 | orchestrator | 2026-01-05 02:23:48 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:23:48.403517 | orchestrator | 2026-01-05 02:23:48 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:23:48.403590 | orchestrator | 2026-01-05 02:23:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:51.454056 | orchestrator | 2026-01-05 02:23:51 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:23:51.455596 | orchestrator | 2026-01-05 02:23:51 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:23:51.455854 | orchestrator | 2026-01-05 02:23:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:54.505473 | orchestrator | 2026-01-05 02:23:54 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:23:54.507068 | orchestrator | 2026-01-05 02:23:54 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:23:54.507138 | orchestrator | 2026-01-05 02:23:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:23:57.556465 | orchestrator | 2026-01-05 02:23:57 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:23:57.559113 | orchestrator | 2026-01-05 02:23:57 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:23:57.559163 | orchestrator | 2026-01-05 02:23:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:24:00.607529 | orchestrator | 2026-01-05 02:24:00 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:24:00.609371 | orchestrator | 2026-01-05 02:24:00 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:24:00.609437 | orchestrator | 2026-01-05 02:24:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:24:03.662431 | orchestrator | 2026-01-05 
02:24:03 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:24:03.663320 | orchestrator | 2026-01-05 02:24:03 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:24:03.663375 | orchestrator | 2026-01-05 02:24:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:24:06.712066 | orchestrator | 2026-01-05 02:24:06 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:24:06.713256 | orchestrator | 2026-01-05 02:24:06 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:24:06.713475 | orchestrator | 2026-01-05 02:24:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:24:09.769041 | orchestrator | 2026-01-05 02:24:09 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:24:09.771192 | orchestrator | 2026-01-05 02:24:09 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:24:09.771315 | orchestrator | 2026-01-05 02:24:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:24:12.821804 | orchestrator | 2026-01-05 02:24:12 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:24:12.824081 | orchestrator | 2026-01-05 02:24:12 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:24:12.824155 | orchestrator | 2026-01-05 02:24:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:24:15.878311 | orchestrator | 2026-01-05 02:24:15 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:24:15.879434 | orchestrator | 2026-01-05 02:24:15 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:24:15.879481 | orchestrator | 2026-01-05 02:24:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:24:18.925477 | orchestrator | 2026-01-05 02:24:18 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 02:24:18.928064 | orchestrator | 2026-01-05 02:24:18 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 02:24:18.928133 | orchestrator | 2026-01-05 02:24:18 | INFO  | Wait 1 second(s) until the next check
[... ~110 identical polling iterations (every ~3 s, 02:24:21 through 02:29:48) omitted: tasks e3a9f185-bcb6-4913-bb1a-d444ee1687d0 and 00e2a2c6-6b94-416a-ac35-b73676807745 remained in state STARTED, each followed by "Wait 1 second(s) until the next check" ...]
2026-01-05 02:29:51.639985 | orchestrator | 2026-01-05 02:29:51 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 02:29:51.641348 | orchestrator | 2026-01-05 02:29:51 | INFO 
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:29:51.641415 | orchestrator | 2026-01-05 02:29:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:29:54.689000 | orchestrator | 2026-01-05 02:29:54 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:29:54.692260 | orchestrator | 2026-01-05 02:29:54 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:29:54.692334 | orchestrator | 2026-01-05 02:29:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:29:57.748378 | orchestrator | 2026-01-05 02:29:57 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:29:57.750503 | orchestrator | 2026-01-05 02:29:57 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:29:57.750715 | orchestrator | 2026-01-05 02:29:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:30:00.796839 | orchestrator | 2026-01-05 02:30:00 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:30:00.798487 | orchestrator | 2026-01-05 02:30:00 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:30:00.798520 | orchestrator | 2026-01-05 02:30:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:30:03.847335 | orchestrator | 2026-01-05 02:30:03 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:30:03.849086 | orchestrator | 2026-01-05 02:30:03 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:30:03.849139 | orchestrator | 2026-01-05 02:30:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:30:06.901270 | orchestrator | 2026-01-05 02:30:06 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:30:06.902475 | orchestrator | 2026-01-05 02:30:06 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
02:30:06.902555 | orchestrator | 2026-01-05 02:30:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:30:09.950497 | orchestrator | 2026-01-05 02:30:09 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:30:09.952733 | orchestrator | 2026-01-05 02:30:09 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:30:09.952964 | orchestrator | 2026-01-05 02:30:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:30:12.998773 | orchestrator | 2026-01-05 02:30:12 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:30:13.002361 | orchestrator | 2026-01-05 02:30:13 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:30:13.002450 | orchestrator | 2026-01-05 02:30:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:30:16.051466 | orchestrator | 2026-01-05 02:30:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:30:16.053663 | orchestrator | 2026-01-05 02:30:16 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:30:16.053733 | orchestrator | 2026-01-05 02:30:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:30:19.097583 | orchestrator | 2026-01-05 02:30:19 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:30:19.098219 | orchestrator | 2026-01-05 02:30:19 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:30:19.098264 | orchestrator | 2026-01-05 02:30:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:30:22.149791 | orchestrator | 2026-01-05 02:30:22 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:30:22.151563 | orchestrator | 2026-01-05 02:30:22 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:30:22.151601 | orchestrator | 2026-01-05 02:30:22 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:30:25.197058 | orchestrator | 2026-01-05 02:30:25 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:30:25.199266 | orchestrator | 2026-01-05 02:30:25 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:30:25.199694 | orchestrator | 2026-01-05 02:30:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:30:28.257655 | orchestrator | 2026-01-05 02:30:28 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:30:28.258721 | orchestrator | 2026-01-05 02:30:28 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:30:28.258753 | orchestrator | 2026-01-05 02:30:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:30:31.313911 | orchestrator | 2026-01-05 02:30:31 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:30:31.316015 | orchestrator | 2026-01-05 02:30:31 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:30:31.316120 | orchestrator | 2026-01-05 02:30:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:30:34.373061 | orchestrator | 2026-01-05 02:30:34 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:30:34.375655 | orchestrator | 2026-01-05 02:30:34 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:30:34.375729 | orchestrator | 2026-01-05 02:30:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:30:37.433963 | orchestrator | 2026-01-05 02:30:37 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:30:37.436326 | orchestrator | 2026-01-05 02:30:37 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:30:37.436388 | orchestrator | 2026-01-05 02:30:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:30:40.488210 | orchestrator | 2026-01-05 
02:30:40 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:30:40.490299 | orchestrator | 2026-01-05 02:30:40 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:30:40.490438 | orchestrator | 2026-01-05 02:30:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:30:43.544327 | orchestrator | 2026-01-05 02:30:43 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:30:43.545342 | orchestrator | 2026-01-05 02:30:43 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:30:43.545412 | orchestrator | 2026-01-05 02:30:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:30:46.590205 | orchestrator | 2026-01-05 02:30:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:30:46.591863 | orchestrator | 2026-01-05 02:30:46 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:30:46.591951 | orchestrator | 2026-01-05 02:30:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:30:49.644823 | orchestrator | 2026-01-05 02:30:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:30:49.647073 | orchestrator | 2026-01-05 02:30:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:30:49.647120 | orchestrator | 2026-01-05 02:30:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:30:52.700209 | orchestrator | 2026-01-05 02:30:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:30:52.702462 | orchestrator | 2026-01-05 02:30:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:30:52.702618 | orchestrator | 2026-01-05 02:30:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:30:55.751452 | orchestrator | 2026-01-05 02:30:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 02:30:55.752712 | orchestrator | 2026-01-05 02:30:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:30:55.752818 | orchestrator | 2026-01-05 02:30:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:30:58.805459 | orchestrator | 2026-01-05 02:30:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:30:58.806620 | orchestrator | 2026-01-05 02:30:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:30:58.806689 | orchestrator | 2026-01-05 02:30:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:01.859356 | orchestrator | 2026-01-05 02:31:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:31:01.861907 | orchestrator | 2026-01-05 02:31:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:31:01.861998 | orchestrator | 2026-01-05 02:31:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:04.912118 | orchestrator | 2026-01-05 02:31:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:31:04.914433 | orchestrator | 2026-01-05 02:31:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:31:04.914500 | orchestrator | 2026-01-05 02:31:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:07.967673 | orchestrator | 2026-01-05 02:31:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:31:07.969405 | orchestrator | 2026-01-05 02:31:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:31:07.969459 | orchestrator | 2026-01-05 02:31:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:11.020505 | orchestrator | 2026-01-05 02:31:11 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:31:11.024824 | orchestrator | 2026-01-05 02:31:11 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:31:11.024921 | orchestrator | 2026-01-05 02:31:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:14.072504 | orchestrator | 2026-01-05 02:31:14 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:31:14.077586 | orchestrator | 2026-01-05 02:31:14 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:31:14.077780 | orchestrator | 2026-01-05 02:31:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:17.136584 | orchestrator | 2026-01-05 02:31:17 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:31:17.138803 | orchestrator | 2026-01-05 02:31:17 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:31:17.138864 | orchestrator | 2026-01-05 02:31:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:20.188143 | orchestrator | 2026-01-05 02:31:20 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:31:20.190932 | orchestrator | 2026-01-05 02:31:20 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:31:20.190989 | orchestrator | 2026-01-05 02:31:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:23.239598 | orchestrator | 2026-01-05 02:31:23 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:31:23.239774 | orchestrator | 2026-01-05 02:31:23 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:31:23.239870 | orchestrator | 2026-01-05 02:31:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:26.282474 | orchestrator | 2026-01-05 02:31:26 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:31:26.285450 | orchestrator | 2026-01-05 02:31:26 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
02:31:26.285533 | orchestrator | 2026-01-05 02:31:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:29.339375 | orchestrator | 2026-01-05 02:31:29 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:31:29.341181 | orchestrator | 2026-01-05 02:31:29 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:31:29.341242 | orchestrator | 2026-01-05 02:31:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:32.399621 | orchestrator | 2026-01-05 02:31:32 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:31:32.402245 | orchestrator | 2026-01-05 02:31:32 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:31:32.402356 | orchestrator | 2026-01-05 02:31:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:35.452390 | orchestrator | 2026-01-05 02:31:35 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:31:35.454371 | orchestrator | 2026-01-05 02:31:35 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:31:35.454463 | orchestrator | 2026-01-05 02:31:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:38.511445 | orchestrator | 2026-01-05 02:31:38 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:31:38.515866 | orchestrator | 2026-01-05 02:31:38 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:31:38.515950 | orchestrator | 2026-01-05 02:31:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:41.569921 | orchestrator | 2026-01-05 02:31:41 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:31:41.572632 | orchestrator | 2026-01-05 02:31:41 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:31:41.572697 | orchestrator | 2026-01-05 02:31:41 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:31:44.620416 | orchestrator | 2026-01-05 02:31:44 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:31:44.622428 | orchestrator | 2026-01-05 02:31:44 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:31:44.623251 | orchestrator | 2026-01-05 02:31:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:47.686518 | orchestrator | 2026-01-05 02:31:47 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:31:47.688447 | orchestrator | 2026-01-05 02:31:47 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:31:47.688593 | orchestrator | 2026-01-05 02:31:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:50.745403 | orchestrator | 2026-01-05 02:31:50 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:31:50.747963 | orchestrator | 2026-01-05 02:31:50 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:31:50.748057 | orchestrator | 2026-01-05 02:31:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:53.797887 | orchestrator | 2026-01-05 02:31:53 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:31:53.802250 | orchestrator | 2026-01-05 02:31:53 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:31:53.802335 | orchestrator | 2026-01-05 02:31:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:56.858632 | orchestrator | 2026-01-05 02:31:56 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:31:56.860943 | orchestrator | 2026-01-05 02:31:56 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:31:56.861007 | orchestrator | 2026-01-05 02:31:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:31:59.919249 | orchestrator | 2026-01-05 
02:31:59 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:31:59.920067 | orchestrator | 2026-01-05 02:31:59 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:31:59.920220 | orchestrator | 2026-01-05 02:31:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:02.964476 | orchestrator | 2026-01-05 02:32:02 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:32:02.965171 | orchestrator | 2026-01-05 02:32:02 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:32:02.965199 | orchestrator | 2026-01-05 02:32:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:06.014359 | orchestrator | 2026-01-05 02:32:06 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:32:06.015609 | orchestrator | 2026-01-05 02:32:06 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:32:06.015649 | orchestrator | 2026-01-05 02:32:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:09.062592 | orchestrator | 2026-01-05 02:32:09 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:32:09.063980 | orchestrator | 2026-01-05 02:32:09 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:32:09.064171 | orchestrator | 2026-01-05 02:32:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:12.111233 | orchestrator | 2026-01-05 02:32:12 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:32:12.112127 | orchestrator | 2026-01-05 02:32:12 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:32:12.112208 | orchestrator | 2026-01-05 02:32:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:15.162311 | orchestrator | 2026-01-05 02:32:15 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 02:32:15.164700 | orchestrator | 2026-01-05 02:32:15 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:32:15.164752 | orchestrator | 2026-01-05 02:32:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:18.217412 | orchestrator | 2026-01-05 02:32:18 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:32:18.220059 | orchestrator | 2026-01-05 02:32:18 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:32:18.220144 | orchestrator | 2026-01-05 02:32:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:21.277658 | orchestrator | 2026-01-05 02:32:21 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:32:21.279238 | orchestrator | 2026-01-05 02:32:21 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:32:21.279304 | orchestrator | 2026-01-05 02:32:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:24.328303 | orchestrator | 2026-01-05 02:32:24 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:32:24.331014 | orchestrator | 2026-01-05 02:32:24 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:32:24.331132 | orchestrator | 2026-01-05 02:32:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:27.385030 | orchestrator | 2026-01-05 02:32:27 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:32:27.386589 | orchestrator | 2026-01-05 02:32:27 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:32:27.386654 | orchestrator | 2026-01-05 02:32:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:30.434157 | orchestrator | 2026-01-05 02:32:30 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:32:30.434918 | orchestrator | 2026-01-05 02:32:30 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:32:30.434973 | orchestrator | 2026-01-05 02:32:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:33.487340 | orchestrator | 2026-01-05 02:32:33 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:32:33.488441 | orchestrator | 2026-01-05 02:32:33 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:32:33.488543 | orchestrator | 2026-01-05 02:32:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:36.537445 | orchestrator | 2026-01-05 02:32:36 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:32:36.538532 | orchestrator | 2026-01-05 02:32:36 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:32:36.538560 | orchestrator | 2026-01-05 02:32:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:39.586728 | orchestrator | 2026-01-05 02:32:39 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:32:39.588439 | orchestrator | 2026-01-05 02:32:39 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:32:39.588506 | orchestrator | 2026-01-05 02:32:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:42.641308 | orchestrator | 2026-01-05 02:32:42 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:32:42.643171 | orchestrator | 2026-01-05 02:32:42 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:32:42.643269 | orchestrator | 2026-01-05 02:32:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:45.689598 | orchestrator | 2026-01-05 02:32:45 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:32:45.690472 | orchestrator | 2026-01-05 02:32:45 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
02:32:45.690522 | orchestrator | 2026-01-05 02:32:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:48.742159 | orchestrator | 2026-01-05 02:32:48 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:32:48.745142 | orchestrator | 2026-01-05 02:32:48 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:32:48.745263 | orchestrator | 2026-01-05 02:32:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:51.792490 | orchestrator | 2026-01-05 02:32:51 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:32:51.794264 | orchestrator | 2026-01-05 02:32:51 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:32:51.794316 | orchestrator | 2026-01-05 02:32:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:54.845896 | orchestrator | 2026-01-05 02:32:54 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:32:54.846914 | orchestrator | 2026-01-05 02:32:54 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:32:54.847463 | orchestrator | 2026-01-05 02:32:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:32:57.895832 | orchestrator | 2026-01-05 02:32:57 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:32:57.898361 | orchestrator | 2026-01-05 02:32:57 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:32:57.898432 | orchestrator | 2026-01-05 02:32:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:33:00.947004 | orchestrator | 2026-01-05 02:33:00 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:33:00.947909 | orchestrator | 2026-01-05 02:33:00 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:33:00.948491 | orchestrator | 2026-01-05 02:33:00 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:33:03.997538 | orchestrator | 2026-01-05 02:33:03 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:33:03.998865 | orchestrator | 2026-01-05 02:33:03 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:33:03.999104 | orchestrator | 2026-01-05 02:33:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:33:07.051811 | orchestrator | 2026-01-05 02:33:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:33:07.053652 | orchestrator | 2026-01-05 02:33:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:33:07.053723 | orchestrator | 2026-01-05 02:33:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:33:10.089786 | orchestrator | 2026-01-05 02:33:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:33:10.090380 | orchestrator | 2026-01-05 02:33:10 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:33:10.090411 | orchestrator | 2026-01-05 02:33:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:33:13.137824 | orchestrator | 2026-01-05 02:33:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:33:13.139873 | orchestrator | 2026-01-05 02:33:13 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:33:13.139911 | orchestrator | 2026-01-05 02:33:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:33:16.192568 | orchestrator | 2026-01-05 02:33:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:33:16.195940 | orchestrator | 2026-01-05 02:33:16 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:33:16.196067 | orchestrator | 2026-01-05 02:33:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:33:19.250273 | orchestrator | 2026-01-05 
02:33:19 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:33:19.252361 | orchestrator | 2026-01-05 02:33:19 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:33:19.252431 | orchestrator | 2026-01-05 02:33:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:33:22.303792 | orchestrator | 2026-01-05 02:33:22 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:33:22.306535 | orchestrator | 2026-01-05 02:33:22 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:33:22.306653 | orchestrator | 2026-01-05 02:33:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:33:25.358639 | orchestrator | 2026-01-05 02:33:25 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:33:25.360497 | orchestrator | 2026-01-05 02:33:25 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:33:25.360558 | orchestrator | 2026-01-05 02:33:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:33:28.415605 | orchestrator | 2026-01-05 02:33:28 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:33:28.417912 | orchestrator | 2026-01-05 02:33:28 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:33:28.417985 | orchestrator | 2026-01-05 02:33:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:33:31.471817 | orchestrator | 2026-01-05 02:33:31 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:33:31.475906 | orchestrator | 2026-01-05 02:33:31 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:33:31.475982 | orchestrator | 2026-01-05 02:33:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:33:34.533940 | orchestrator | 2026-01-05 02:33:34 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 02:33:34.536993 | orchestrator | 2026-01-05 02:33:34 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:33:34.537074 | orchestrator | 2026-01-05 02:33:34 | INFO  | Wait 1 second(s) until the next check
[... identical polling messages for tasks e3a9f185-bcb6-4913-bb1a-d444ee1687d0 and 00e2a2c6-6b94-416a-ac35-b73676807745 repeated every ~3 seconds from 02:33:37 through 02:38:48 ...]
2026-01-05 02:38:51.868717 | orchestrator | 2026-01-05 02:38:51 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state
STARTED 2026-01-05 02:38:51.871182 | orchestrator | 2026-01-05 02:38:51 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:38:51.871236 | orchestrator | 2026-01-05 02:38:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:38:54.920291 | orchestrator | 2026-01-05 02:38:54 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:38:54.922797 | orchestrator | 2026-01-05 02:38:54 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:38:54.922857 | orchestrator | 2026-01-05 02:38:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:38:57.978782 | orchestrator | 2026-01-05 02:38:57 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:38:57.980425 | orchestrator | 2026-01-05 02:38:57 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:38:57.980501 | orchestrator | 2026-01-05 02:38:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:39:01.029821 | orchestrator | 2026-01-05 02:39:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:39:01.030993 | orchestrator | 2026-01-05 02:39:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:39:01.031217 | orchestrator | 2026-01-05 02:39:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:39:04.073564 | orchestrator | 2026-01-05 02:39:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:39:04.074947 | orchestrator | 2026-01-05 02:39:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:39:04.074988 | orchestrator | 2026-01-05 02:39:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:39:07.123344 | orchestrator | 2026-01-05 02:39:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:39:07.124833 | orchestrator | 2026-01-05 02:39:07 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:39:07.124976 | orchestrator | 2026-01-05 02:39:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:39:10.160558 | orchestrator | 2026-01-05 02:39:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:39:10.160677 | orchestrator | 2026-01-05 02:39:10 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:39:10.160698 | orchestrator | 2026-01-05 02:39:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:39:13.212944 | orchestrator | 2026-01-05 02:39:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:39:13.215337 | orchestrator | 2026-01-05 02:39:13 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:39:13.215419 | orchestrator | 2026-01-05 02:39:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:39:16.266142 | orchestrator | 2026-01-05 02:39:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:39:16.269896 | orchestrator | 2026-01-05 02:39:16 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:39:16.270066 | orchestrator | 2026-01-05 02:39:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:39:19.315769 | orchestrator | 2026-01-05 02:39:19 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:39:19.319318 | orchestrator | 2026-01-05 02:39:19 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:39:19.319393 | orchestrator | 2026-01-05 02:39:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:39:22.368637 | orchestrator | 2026-01-05 02:39:22 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:39:22.372710 | orchestrator | 2026-01-05 02:39:22 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
02:39:22.372799 | orchestrator | 2026-01-05 02:39:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:39:25.426167 | orchestrator | 2026-01-05 02:39:25 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:39:25.427397 | orchestrator | 2026-01-05 02:39:25 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:39:25.427519 | orchestrator | 2026-01-05 02:39:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:39:28.484347 | orchestrator | 2026-01-05 02:39:28 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:39:28.485976 | orchestrator | 2026-01-05 02:39:28 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:39:28.486115 | orchestrator | 2026-01-05 02:39:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:39:31.538404 | orchestrator | 2026-01-05 02:39:31 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:39:31.540314 | orchestrator | 2026-01-05 02:39:31 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:39:31.540378 | orchestrator | 2026-01-05 02:39:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:39:34.591756 | orchestrator | 2026-01-05 02:39:34 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:39:34.595285 | orchestrator | 2026-01-05 02:39:34 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:39:34.595354 | orchestrator | 2026-01-05 02:39:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:39:37.649454 | orchestrator | 2026-01-05 02:39:37 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:39:37.652800 | orchestrator | 2026-01-05 02:39:37 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:39:37.652961 | orchestrator | 2026-01-05 02:39:37 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:39:40.705177 | orchestrator | 2026-01-05 02:39:40 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:39:40.707367 | orchestrator | 2026-01-05 02:39:40 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:39:40.707968 | orchestrator | 2026-01-05 02:39:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:39:43.762273 | orchestrator | 2026-01-05 02:39:43 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:39:43.766347 | orchestrator | 2026-01-05 02:39:43 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:39:43.766496 | orchestrator | 2026-01-05 02:39:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:39:46.818451 | orchestrator | 2026-01-05 02:39:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:39:46.820395 | orchestrator | 2026-01-05 02:39:46 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:39:46.820579 | orchestrator | 2026-01-05 02:39:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:39:49.865475 | orchestrator | 2026-01-05 02:39:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:39:49.867827 | orchestrator | 2026-01-05 02:39:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:39:49.867884 | orchestrator | 2026-01-05 02:39:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:39:52.915029 | orchestrator | 2026-01-05 02:39:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:39:52.917755 | orchestrator | 2026-01-05 02:39:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:39:52.917919 | orchestrator | 2026-01-05 02:39:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:39:55.958105 | orchestrator | 2026-01-05 
02:39:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:39:55.960179 | orchestrator | 2026-01-05 02:39:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:39:55.960244 | orchestrator | 2026-01-05 02:39:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:39:59.017952 | orchestrator | 2026-01-05 02:39:59 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:39:59.018891 | orchestrator | 2026-01-05 02:39:59 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:39:59.019007 | orchestrator | 2026-01-05 02:39:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:02.064267 | orchestrator | 2026-01-05 02:40:02 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:40:02.065085 | orchestrator | 2026-01-05 02:40:02 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:40:02.065116 | orchestrator | 2026-01-05 02:40:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:05.115731 | orchestrator | 2026-01-05 02:40:05 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:40:05.117779 | orchestrator | 2026-01-05 02:40:05 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:40:05.117878 | orchestrator | 2026-01-05 02:40:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:08.162954 | orchestrator | 2026-01-05 02:40:08 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:40:08.163166 | orchestrator | 2026-01-05 02:40:08 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:40:08.163186 | orchestrator | 2026-01-05 02:40:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:11.213440 | orchestrator | 2026-01-05 02:40:11 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 02:40:11.214285 | orchestrator | 2026-01-05 02:40:11 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:40:11.214321 | orchestrator | 2026-01-05 02:40:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:14.265543 | orchestrator | 2026-01-05 02:40:14 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:40:14.266682 | orchestrator | 2026-01-05 02:40:14 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:40:14.266740 | orchestrator | 2026-01-05 02:40:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:17.314451 | orchestrator | 2026-01-05 02:40:17 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:40:17.316972 | orchestrator | 2026-01-05 02:40:17 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:40:17.317065 | orchestrator | 2026-01-05 02:40:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:20.367331 | orchestrator | 2026-01-05 02:40:20 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:40:20.369378 | orchestrator | 2026-01-05 02:40:20 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:40:20.369702 | orchestrator | 2026-01-05 02:40:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:23.419198 | orchestrator | 2026-01-05 02:40:23 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:40:23.420586 | orchestrator | 2026-01-05 02:40:23 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:40:23.420628 | orchestrator | 2026-01-05 02:40:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:26.478771 | orchestrator | 2026-01-05 02:40:26 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:40:26.481244 | orchestrator | 2026-01-05 02:40:26 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:40:26.481321 | orchestrator | 2026-01-05 02:40:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:29.531116 | orchestrator | 2026-01-05 02:40:29 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:40:29.532809 | orchestrator | 2026-01-05 02:40:29 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:40:29.533042 | orchestrator | 2026-01-05 02:40:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:32.581207 | orchestrator | 2026-01-05 02:40:32 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:40:32.583728 | orchestrator | 2026-01-05 02:40:32 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:40:32.583777 | orchestrator | 2026-01-05 02:40:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:35.635590 | orchestrator | 2026-01-05 02:40:35 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:40:35.636598 | orchestrator | 2026-01-05 02:40:35 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:40:35.636705 | orchestrator | 2026-01-05 02:40:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:38.682768 | orchestrator | 2026-01-05 02:40:38 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:40:38.684593 | orchestrator | 2026-01-05 02:40:38 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:40:38.684653 | orchestrator | 2026-01-05 02:40:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:41.739758 | orchestrator | 2026-01-05 02:40:41 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:40:41.742224 | orchestrator | 2026-01-05 02:40:41 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
02:40:41.742304 | orchestrator | 2026-01-05 02:40:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:44.795468 | orchestrator | 2026-01-05 02:40:44 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:40:44.797345 | orchestrator | 2026-01-05 02:40:44 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:40:44.797454 | orchestrator | 2026-01-05 02:40:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:47.845667 | orchestrator | 2026-01-05 02:40:47 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:40:47.847823 | orchestrator | 2026-01-05 02:40:47 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:40:47.847909 | orchestrator | 2026-01-05 02:40:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:50.900507 | orchestrator | 2026-01-05 02:40:50 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:40:50.903450 | orchestrator | 2026-01-05 02:40:50 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:40:50.903514 | orchestrator | 2026-01-05 02:40:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:53.955698 | orchestrator | 2026-01-05 02:40:53 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:40:53.959138 | orchestrator | 2026-01-05 02:40:53 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:40:53.959229 | orchestrator | 2026-01-05 02:40:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:40:57.017469 | orchestrator | 2026-01-05 02:40:57 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:40:57.020671 | orchestrator | 2026-01-05 02:40:57 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:40:57.020782 | orchestrator | 2026-01-05 02:40:57 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:41:00.072507 | orchestrator | 2026-01-05 02:41:00 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:41:00.073262 | orchestrator | 2026-01-05 02:41:00 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:41:00.073304 | orchestrator | 2026-01-05 02:41:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:03.118755 | orchestrator | 2026-01-05 02:41:03 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:41:03.120708 | orchestrator | 2026-01-05 02:41:03 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:41:03.120788 | orchestrator | 2026-01-05 02:41:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:06.161420 | orchestrator | 2026-01-05 02:41:06 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:41:06.163367 | orchestrator | 2026-01-05 02:41:06 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:41:06.163417 | orchestrator | 2026-01-05 02:41:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:09.209826 | orchestrator | 2026-01-05 02:41:09 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:41:09.210716 | orchestrator | 2026-01-05 02:41:09 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:41:09.210774 | orchestrator | 2026-01-05 02:41:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:12.262396 | orchestrator | 2026-01-05 02:41:12 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:41:12.266181 | orchestrator | 2026-01-05 02:41:12 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:41:12.266288 | orchestrator | 2026-01-05 02:41:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:15.318998 | orchestrator | 2026-01-05 
02:41:15 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:41:15.320323 | orchestrator | 2026-01-05 02:41:15 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:41:15.320373 | orchestrator | 2026-01-05 02:41:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:18.382455 | orchestrator | 2026-01-05 02:41:18 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:41:18.383954 | orchestrator | 2026-01-05 02:41:18 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:41:18.384010 | orchestrator | 2026-01-05 02:41:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:21.431626 | orchestrator | 2026-01-05 02:41:21 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:41:21.435029 | orchestrator | 2026-01-05 02:41:21 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:41:21.435228 | orchestrator | 2026-01-05 02:41:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:24.484507 | orchestrator | 2026-01-05 02:41:24 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:41:24.485339 | orchestrator | 2026-01-05 02:41:24 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:41:24.485399 | orchestrator | 2026-01-05 02:41:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:27.543411 | orchestrator | 2026-01-05 02:41:27 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:41:27.546864 | orchestrator | 2026-01-05 02:41:27 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:41:27.546945 | orchestrator | 2026-01-05 02:41:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:30.598276 | orchestrator | 2026-01-05 02:41:30 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 02:41:30.599862 | orchestrator | 2026-01-05 02:41:30 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:41:30.599917 | orchestrator | 2026-01-05 02:41:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:33.652417 | orchestrator | 2026-01-05 02:41:33 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:41:33.654462 | orchestrator | 2026-01-05 02:41:33 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:41:33.654531 | orchestrator | 2026-01-05 02:41:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:36.704507 | orchestrator | 2026-01-05 02:41:36 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:41:36.705797 | orchestrator | 2026-01-05 02:41:36 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:41:36.705868 | orchestrator | 2026-01-05 02:41:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:39.757913 | orchestrator | 2026-01-05 02:41:39 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:41:39.760063 | orchestrator | 2026-01-05 02:41:39 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:41:39.760274 | orchestrator | 2026-01-05 02:41:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:42.810369 | orchestrator | 2026-01-05 02:41:42 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:41:42.811980 | orchestrator | 2026-01-05 02:41:42 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:41:42.812064 | orchestrator | 2026-01-05 02:41:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:45.864390 | orchestrator | 2026-01-05 02:41:45 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:41:45.868477 | orchestrator | 2026-01-05 02:41:45 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:41:45.868539 | orchestrator | 2026-01-05 02:41:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:48.923692 | orchestrator | 2026-01-05 02:41:48 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:41:48.925020 | orchestrator | 2026-01-05 02:41:48 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:41:48.925058 | orchestrator | 2026-01-05 02:41:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:51.976178 | orchestrator | 2026-01-05 02:41:51 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:41:51.977937 | orchestrator | 2026-01-05 02:41:51 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:41:51.978051 | orchestrator | 2026-01-05 02:41:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:55.039736 | orchestrator | 2026-01-05 02:41:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:41:55.042069 | orchestrator | 2026-01-05 02:41:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:41:55.042366 | orchestrator | 2026-01-05 02:41:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:41:58.091677 | orchestrator | 2026-01-05 02:41:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:41:58.094228 | orchestrator | 2026-01-05 02:41:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:41:58.094315 | orchestrator | 2026-01-05 02:41:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:42:01.134378 | orchestrator | 2026-01-05 02:42:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:42:01.136254 | orchestrator | 2026-01-05 02:42:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
02:42:01.136680 | orchestrator | 2026-01-05 02:42:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:42:04.182189 | orchestrator | 2026-01-05 02:42:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:42:04.183621 | orchestrator | 2026-01-05 02:42:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:42:04.183645 | orchestrator | 2026-01-05 02:42:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:42:07.239828 | orchestrator | 2026-01-05 02:42:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:42:07.240866 | orchestrator | 2026-01-05 02:42:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:42:07.241020 | orchestrator | 2026-01-05 02:42:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:42:10.288177 | orchestrator | 2026-01-05 02:42:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:42:10.288974 | orchestrator | 2026-01-05 02:42:10 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:42:10.289011 | orchestrator | 2026-01-05 02:42:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:42:13.338321 | orchestrator | 2026-01-05 02:42:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:42:13.340236 | orchestrator | 2026-01-05 02:42:13 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:42:13.340294 | orchestrator | 2026-01-05 02:42:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:42:16.389805 | orchestrator | 2026-01-05 02:42:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:42:16.391169 | orchestrator | 2026-01-05 02:42:16 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:42:16.391364 | orchestrator | 2026-01-05 02:42:16 | INFO  | Wait 1 second(s) 
until the next check
2026-01-05 02:42:19.441664 | orchestrator | 2026-01-05 02:42:19 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 02:42:19.443013 | orchestrator | 2026-01-05 02:42:19 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
[... repetitive polling output elided: both tasks were reported "in state STARTED" every ~3 seconds, each check followed by "Wait 1 second(s) until the next check", from 02:42:19 through 02:47:33 with no state change ...]
2026-01-05 02:47:33.704987 | orchestrator | 2026-01-05 02:47:33 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 02:47:33.706333 | orchestrator | 2026-01-05 02:47:33 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 02:47:33.706489 | orchestrator | 2026-01-05 02:47:33 | INFO  | Wait 1 second(s)
until the next check 2026-01-05 02:47:36.760116 | orchestrator | 2026-01-05 02:47:36 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:47:36.763134 | orchestrator | 2026-01-05 02:47:36 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:47:36.763238 | orchestrator | 2026-01-05 02:47:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:47:39.807598 | orchestrator | 2026-01-05 02:47:39 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:47:39.810689 | orchestrator | 2026-01-05 02:47:39 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:47:39.810777 | orchestrator | 2026-01-05 02:47:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:47:42.863781 | orchestrator | 2026-01-05 02:47:42 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:47:42.865412 | orchestrator | 2026-01-05 02:47:42 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:47:42.865450 | orchestrator | 2026-01-05 02:47:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:47:45.913814 | orchestrator | 2026-01-05 02:47:45 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:47:45.916430 | orchestrator | 2026-01-05 02:47:45 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:47:45.916517 | orchestrator | 2026-01-05 02:47:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:47:48.968142 | orchestrator | 2026-01-05 02:47:48 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:47:48.970560 | orchestrator | 2026-01-05 02:47:48 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:47:48.970712 | orchestrator | 2026-01-05 02:47:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:47:52.016145 | orchestrator | 2026-01-05 
02:47:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:47:52.017132 | orchestrator | 2026-01-05 02:47:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:47:52.017190 | orchestrator | 2026-01-05 02:47:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:47:55.063115 | orchestrator | 2026-01-05 02:47:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:47:55.065363 | orchestrator | 2026-01-05 02:47:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:47:55.065497 | orchestrator | 2026-01-05 02:47:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:47:58.110880 | orchestrator | 2026-01-05 02:47:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:47:58.112627 | orchestrator | 2026-01-05 02:47:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:47:58.112665 | orchestrator | 2026-01-05 02:47:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:48:01.160676 | orchestrator | 2026-01-05 02:48:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:48:01.164867 | orchestrator | 2026-01-05 02:48:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:48:01.164944 | orchestrator | 2026-01-05 02:48:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:48:04.218069 | orchestrator | 2026-01-05 02:48:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:48:04.220294 | orchestrator | 2026-01-05 02:48:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:48:04.220363 | orchestrator | 2026-01-05 02:48:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:48:07.266216 | orchestrator | 2026-01-05 02:48:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 02:48:07.267815 | orchestrator | 2026-01-05 02:48:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:48:07.267869 | orchestrator | 2026-01-05 02:48:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:48:10.316871 | orchestrator | 2026-01-05 02:48:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:48:10.318001 | orchestrator | 2026-01-05 02:48:10 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:48:10.318109 | orchestrator | 2026-01-05 02:48:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:48:13.366449 | orchestrator | 2026-01-05 02:48:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:48:13.369305 | orchestrator | 2026-01-05 02:48:13 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:48:13.369433 | orchestrator | 2026-01-05 02:48:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:48:16.422128 | orchestrator | 2026-01-05 02:48:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:48:16.424527 | orchestrator | 2026-01-05 02:48:16 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:48:16.424602 | orchestrator | 2026-01-05 02:48:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:48:19.478109 | orchestrator | 2026-01-05 02:48:19 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:48:19.480071 | orchestrator | 2026-01-05 02:48:19 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:48:19.480140 | orchestrator | 2026-01-05 02:48:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:48:22.523327 | orchestrator | 2026-01-05 02:48:22 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:48:22.525502 | orchestrator | 2026-01-05 02:48:22 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:48:22.525580 | orchestrator | 2026-01-05 02:48:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:48:25.573991 | orchestrator | 2026-01-05 02:48:25 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:48:25.576475 | orchestrator | 2026-01-05 02:48:25 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:48:25.576545 | orchestrator | 2026-01-05 02:48:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:48:28.624984 | orchestrator | 2026-01-05 02:48:28 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:48:28.627257 | orchestrator | 2026-01-05 02:48:28 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:48:28.627537 | orchestrator | 2026-01-05 02:48:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:48:31.680899 | orchestrator | 2026-01-05 02:48:31 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:48:31.682817 | orchestrator | 2026-01-05 02:48:31 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:48:31.682898 | orchestrator | 2026-01-05 02:48:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:48:34.729781 | orchestrator | 2026-01-05 02:48:34 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:48:34.730783 | orchestrator | 2026-01-05 02:48:34 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:48:34.730874 | orchestrator | 2026-01-05 02:48:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:48:37.771200 | orchestrator | 2026-01-05 02:48:37 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:48:37.772501 | orchestrator | 2026-01-05 02:48:37 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
02:48:37.772570 | orchestrator | 2026-01-05 02:48:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:48:40.822911 | orchestrator | 2026-01-05 02:48:40 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:48:40.824195 | orchestrator | 2026-01-05 02:48:40 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:48:40.824212 | orchestrator | 2026-01-05 02:48:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:48:43.874666 | orchestrator | 2026-01-05 02:48:43 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:48:43.876267 | orchestrator | 2026-01-05 02:48:43 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:48:43.876322 | orchestrator | 2026-01-05 02:48:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:48:46.928263 | orchestrator | 2026-01-05 02:48:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:48:46.930439 | orchestrator | 2026-01-05 02:48:46 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:48:46.930554 | orchestrator | 2026-01-05 02:48:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:48:49.977751 | orchestrator | 2026-01-05 02:48:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:48:49.979357 | orchestrator | 2026-01-05 02:48:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:48:49.979488 | orchestrator | 2026-01-05 02:48:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:48:53.035414 | orchestrator | 2026-01-05 02:48:53 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:48:53.038202 | orchestrator | 2026-01-05 02:48:53 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:48:53.038754 | orchestrator | 2026-01-05 02:48:53 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:48:56.083102 | orchestrator | 2026-01-05 02:48:56 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:48:56.085616 | orchestrator | 2026-01-05 02:48:56 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:48:56.085724 | orchestrator | 2026-01-05 02:48:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:48:59.125741 | orchestrator | 2026-01-05 02:48:59 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:48:59.127075 | orchestrator | 2026-01-05 02:48:59 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:48:59.127184 | orchestrator | 2026-01-05 02:48:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:02.178221 | orchestrator | 2026-01-05 02:49:02 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:49:02.180201 | orchestrator | 2026-01-05 02:49:02 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:49:02.180246 | orchestrator | 2026-01-05 02:49:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:05.229291 | orchestrator | 2026-01-05 02:49:05 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:49:05.233853 | orchestrator | 2026-01-05 02:49:05 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:49:05.233946 | orchestrator | 2026-01-05 02:49:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:08.287133 | orchestrator | 2026-01-05 02:49:08 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:49:08.289257 | orchestrator | 2026-01-05 02:49:08 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:49:08.289335 | orchestrator | 2026-01-05 02:49:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:11.333870 | orchestrator | 2026-01-05 
02:49:11 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:49:11.335592 | orchestrator | 2026-01-05 02:49:11 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:49:11.335754 | orchestrator | 2026-01-05 02:49:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:14.388170 | orchestrator | 2026-01-05 02:49:14 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:49:14.391412 | orchestrator | 2026-01-05 02:49:14 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:49:14.391525 | orchestrator | 2026-01-05 02:49:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:17.445077 | orchestrator | 2026-01-05 02:49:17 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:49:17.447814 | orchestrator | 2026-01-05 02:49:17 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:49:17.447882 | orchestrator | 2026-01-05 02:49:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:20.500240 | orchestrator | 2026-01-05 02:49:20 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:49:20.501156 | orchestrator | 2026-01-05 02:49:20 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:49:20.501192 | orchestrator | 2026-01-05 02:49:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:23.552118 | orchestrator | 2026-01-05 02:49:23 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:49:23.556214 | orchestrator | 2026-01-05 02:49:23 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:49:23.556310 | orchestrator | 2026-01-05 02:49:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:26.612913 | orchestrator | 2026-01-05 02:49:26 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 02:49:26.614859 | orchestrator | 2026-01-05 02:49:26 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:49:26.614954 | orchestrator | 2026-01-05 02:49:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:29.662532 | orchestrator | 2026-01-05 02:49:29 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:49:29.665269 | orchestrator | 2026-01-05 02:49:29 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:49:29.665337 | orchestrator | 2026-01-05 02:49:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:32.720098 | orchestrator | 2026-01-05 02:49:32 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:49:32.722561 | orchestrator | 2026-01-05 02:49:32 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:49:32.722657 | orchestrator | 2026-01-05 02:49:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:35.772796 | orchestrator | 2026-01-05 02:49:35 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:49:35.774550 | orchestrator | 2026-01-05 02:49:35 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:49:35.774639 | orchestrator | 2026-01-05 02:49:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:38.831815 | orchestrator | 2026-01-05 02:49:38 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:49:38.832962 | orchestrator | 2026-01-05 02:49:38 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:49:38.834311 | orchestrator | 2026-01-05 02:49:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:41.883485 | orchestrator | 2026-01-05 02:49:41 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:49:41.884286 | orchestrator | 2026-01-05 02:49:41 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:49:41.884318 | orchestrator | 2026-01-05 02:49:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:44.939907 | orchestrator | 2026-01-05 02:49:44 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:49:44.942961 | orchestrator | 2026-01-05 02:49:44 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:49:44.943047 | orchestrator | 2026-01-05 02:49:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:47.980830 | orchestrator | 2026-01-05 02:49:47 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:49:47.982263 | orchestrator | 2026-01-05 02:49:47 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:49:47.982382 | orchestrator | 2026-01-05 02:49:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:51.041671 | orchestrator | 2026-01-05 02:49:51 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:49:51.043327 | orchestrator | 2026-01-05 02:49:51 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:49:51.043489 | orchestrator | 2026-01-05 02:49:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:54.092514 | orchestrator | 2026-01-05 02:49:54 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:49:54.096159 | orchestrator | 2026-01-05 02:49:54 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:49:54.096273 | orchestrator | 2026-01-05 02:49:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:49:57.141685 | orchestrator | 2026-01-05 02:49:57 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:49:57.143967 | orchestrator | 2026-01-05 02:49:57 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
02:49:57.144037 | orchestrator | 2026-01-05 02:49:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:00.194335 | orchestrator | 2026-01-05 02:50:00 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:50:00.196437 | orchestrator | 2026-01-05 02:50:00 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:50:00.196549 | orchestrator | 2026-01-05 02:50:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:03.244877 | orchestrator | 2026-01-05 02:50:03 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:50:03.246869 | orchestrator | 2026-01-05 02:50:03 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:50:03.246926 | orchestrator | 2026-01-05 02:50:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:06.297273 | orchestrator | 2026-01-05 02:50:06 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:50:06.298263 | orchestrator | 2026-01-05 02:50:06 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:50:06.298336 | orchestrator | 2026-01-05 02:50:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:09.347873 | orchestrator | 2026-01-05 02:50:09 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:50:09.348898 | orchestrator | 2026-01-05 02:50:09 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:50:09.348939 | orchestrator | 2026-01-05 02:50:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:12.393949 | orchestrator | 2026-01-05 02:50:12 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:50:12.395173 | orchestrator | 2026-01-05 02:50:12 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:50:12.395237 | orchestrator | 2026-01-05 02:50:12 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:50:15.449326 | orchestrator | 2026-01-05 02:50:15 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:50:15.450819 | orchestrator | 2026-01-05 02:50:15 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:50:15.450864 | orchestrator | 2026-01-05 02:50:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:18.500067 | orchestrator | 2026-01-05 02:50:18 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:50:18.501645 | orchestrator | 2026-01-05 02:50:18 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:50:18.501787 | orchestrator | 2026-01-05 02:50:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:21.550427 | orchestrator | 2026-01-05 02:50:21 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:50:21.554114 | orchestrator | 2026-01-05 02:50:21 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:50:21.554240 | orchestrator | 2026-01-05 02:50:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:24.605710 | orchestrator | 2026-01-05 02:50:24 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:50:24.608050 | orchestrator | 2026-01-05 02:50:24 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:50:24.608127 | orchestrator | 2026-01-05 02:50:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:27.664059 | orchestrator | 2026-01-05 02:50:27 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:50:27.666924 | orchestrator | 2026-01-05 02:50:27 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:50:27.666993 | orchestrator | 2026-01-05 02:50:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:30.724785 | orchestrator | 2026-01-05 
02:50:30 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:50:30.726542 | orchestrator | 2026-01-05 02:50:30 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:50:30.726603 | orchestrator | 2026-01-05 02:50:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:33.773053 | orchestrator | 2026-01-05 02:50:33 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:50:33.777004 | orchestrator | 2026-01-05 02:50:33 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:50:33.777162 | orchestrator | 2026-01-05 02:50:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:36.824181 | orchestrator | 2026-01-05 02:50:36 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:50:36.825557 | orchestrator | 2026-01-05 02:50:36 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:50:36.826131 | orchestrator | 2026-01-05 02:50:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:39.875011 | orchestrator | 2026-01-05 02:50:39 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:50:39.877403 | orchestrator | 2026-01-05 02:50:39 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:50:39.877461 | orchestrator | 2026-01-05 02:50:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:42.922634 | orchestrator | 2026-01-05 02:50:42 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:50:42.924841 | orchestrator | 2026-01-05 02:50:42 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:50:42.924908 | orchestrator | 2026-01-05 02:50:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:45.975558 | orchestrator | 2026-01-05 02:50:45 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 02:50:45.977058 | orchestrator | 2026-01-05 02:50:45 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:50:45.977091 | orchestrator | 2026-01-05 02:50:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:49.030724 | orchestrator | 2026-01-05 02:50:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:50:49.039909 | orchestrator | 2026-01-05 02:50:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:50:49.039996 | orchestrator | 2026-01-05 02:50:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:52.073034 | orchestrator | 2026-01-05 02:50:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:50:52.074371 | orchestrator | 2026-01-05 02:50:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:50:52.074429 | orchestrator | 2026-01-05 02:50:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:55.126094 | orchestrator | 2026-01-05 02:50:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:50:55.127452 | orchestrator | 2026-01-05 02:50:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:50:55.127562 | orchestrator | 2026-01-05 02:50:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:50:58.173452 | orchestrator | 2026-01-05 02:50:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:50:58.176564 | orchestrator | 2026-01-05 02:50:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:50:58.176645 | orchestrator | 2026-01-05 02:50:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:51:01.227477 | orchestrator | 2026-01-05 02:51:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:51:01.230008 | orchestrator | 2026-01-05 02:51:01 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:51:01.230140 | orchestrator | 2026-01-05 02:51:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:51:04.284280 | orchestrator | 2026-01-05 02:51:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:51:04.285416 | orchestrator | 2026-01-05 02:51:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:51:04.285502 | orchestrator | 2026-01-05 02:51:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:51:07.338599 | orchestrator | 2026-01-05 02:51:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:51:07.339960 | orchestrator | 2026-01-05 02:51:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:51:07.340156 | orchestrator | 2026-01-05 02:51:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:51:10.392096 | orchestrator | 2026-01-05 02:51:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:51:10.393991 | orchestrator | 2026-01-05 02:51:10 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:51:10.394103 | orchestrator | 2026-01-05 02:51:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:51:13.442505 | orchestrator | 2026-01-05 02:51:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:51:13.444324 | orchestrator | 2026-01-05 02:51:13 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:51:13.444410 | orchestrator | 2026-01-05 02:51:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:51:16.494356 | orchestrator | 2026-01-05 02:51:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:51:16.496203 | orchestrator | 2026-01-05 02:51:16 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
02:51:16.496250 | orchestrator | 2026-01-05 02:51:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:51:19.550227 | orchestrator | 2026-01-05 02:51:19 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:51:19.553363 | orchestrator | 2026-01-05 02:51:19 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:51:19.553455 | orchestrator | 2026-01-05 02:51:19 | INFO  | Wait 1 second(s) until the next check
[identical polling output repeated every ~3 seconds; both tasks remained in state STARTED from 02:51:22 through 02:56:45]
2026-01-05 02:56:48.963426 | orchestrator | 2026-01-05 02:56:48 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:56:48.965691 | orchestrator | 2026-01-05 02:56:48 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:56:48.965800 | orchestrator | 2026-01-05 02:56:48 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:56:52.008966 | orchestrator | 2026-01-05 02:56:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:56:52.009821 | orchestrator | 2026-01-05 02:56:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:56:52.009902 | orchestrator | 2026-01-05 02:56:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:56:55.058560 | orchestrator | 2026-01-05 02:56:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:56:55.060434 | orchestrator | 2026-01-05 02:56:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:56:55.060506 | orchestrator | 2026-01-05 02:56:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:56:58.105721 | orchestrator | 2026-01-05 02:56:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:56:58.106635 | orchestrator | 2026-01-05 02:56:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:56:58.106721 | orchestrator | 2026-01-05 02:56:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:57:01.150706 | orchestrator | 2026-01-05 02:57:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:57:01.150961 | orchestrator | 2026-01-05 02:57:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:57:01.150993 | orchestrator | 2026-01-05 02:57:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:57:04.196715 | orchestrator | 2026-01-05 02:57:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:57:04.197016 | orchestrator | 2026-01-05 02:57:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:57:04.197044 | orchestrator | 2026-01-05 02:57:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:57:07.248989 | orchestrator | 2026-01-05 
02:57:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:57:07.250957 | orchestrator | 2026-01-05 02:57:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:57:07.251005 | orchestrator | 2026-01-05 02:57:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:57:10.298428 | orchestrator | 2026-01-05 02:57:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:57:10.300655 | orchestrator | 2026-01-05 02:57:10 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:57:10.300793 | orchestrator | 2026-01-05 02:57:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:57:13.354812 | orchestrator | 2026-01-05 02:57:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:57:13.357443 | orchestrator | 2026-01-05 02:57:13 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:57:13.357511 | orchestrator | 2026-01-05 02:57:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:57:16.411962 | orchestrator | 2026-01-05 02:57:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:57:16.414330 | orchestrator | 2026-01-05 02:57:16 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:57:16.414434 | orchestrator | 2026-01-05 02:57:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:57:19.461465 | orchestrator | 2026-01-05 02:57:19 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:57:19.464238 | orchestrator | 2026-01-05 02:57:19 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:57:19.464293 | orchestrator | 2026-01-05 02:57:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:57:22.518125 | orchestrator | 2026-01-05 02:57:22 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 02:57:22.519282 | orchestrator | 2026-01-05 02:57:22 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:57:22.519511 | orchestrator | 2026-01-05 02:57:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:57:25.567321 | orchestrator | 2026-01-05 02:57:25 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:57:25.569689 | orchestrator | 2026-01-05 02:57:25 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:57:25.569748 | orchestrator | 2026-01-05 02:57:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:57:28.625849 | orchestrator | 2026-01-05 02:57:28 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:57:28.628621 | orchestrator | 2026-01-05 02:57:28 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:57:28.628687 | orchestrator | 2026-01-05 02:57:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:57:31.682233 | orchestrator | 2026-01-05 02:57:31 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:57:31.686313 | orchestrator | 2026-01-05 02:57:31 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:57:31.686392 | orchestrator | 2026-01-05 02:57:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:57:34.733067 | orchestrator | 2026-01-05 02:57:34 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:57:34.735121 | orchestrator | 2026-01-05 02:57:34 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:57:34.735212 | orchestrator | 2026-01-05 02:57:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:57:37.787396 | orchestrator | 2026-01-05 02:57:37 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:57:37.789245 | orchestrator | 2026-01-05 02:57:37 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:57:37.789329 | orchestrator | 2026-01-05 02:57:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:57:40.840887 | orchestrator | 2026-01-05 02:57:40 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:57:40.844756 | orchestrator | 2026-01-05 02:57:40 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:57:40.845038 | orchestrator | 2026-01-05 02:57:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:57:43.898380 | orchestrator | 2026-01-05 02:57:43 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:57:43.899479 | orchestrator | 2026-01-05 02:57:43 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:57:43.899533 | orchestrator | 2026-01-05 02:57:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:57:46.949052 | orchestrator | 2026-01-05 02:57:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:57:46.951342 | orchestrator | 2026-01-05 02:57:46 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:57:46.951377 | orchestrator | 2026-01-05 02:57:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:57:49.997294 | orchestrator | 2026-01-05 02:57:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:57:50.000025 | orchestrator | 2026-01-05 02:57:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:57:50.000082 | orchestrator | 2026-01-05 02:57:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:57:53.046528 | orchestrator | 2026-01-05 02:57:53 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:57:53.049480 | orchestrator | 2026-01-05 02:57:53 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
02:57:53.049555 | orchestrator | 2026-01-05 02:57:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:57:56.103219 | orchestrator | 2026-01-05 02:57:56 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:57:56.104565 | orchestrator | 2026-01-05 02:57:56 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:57:56.104641 | orchestrator | 2026-01-05 02:57:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:57:59.156153 | orchestrator | 2026-01-05 02:57:59 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:57:59.158338 | orchestrator | 2026-01-05 02:57:59 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:57:59.158440 | orchestrator | 2026-01-05 02:57:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:02.206479 | orchestrator | 2026-01-05 02:58:02 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:58:02.207098 | orchestrator | 2026-01-05 02:58:02 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:58:02.207132 | orchestrator | 2026-01-05 02:58:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:05.259588 | orchestrator | 2026-01-05 02:58:05 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:58:05.260542 | orchestrator | 2026-01-05 02:58:05 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:58:05.260578 | orchestrator | 2026-01-05 02:58:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:08.309612 | orchestrator | 2026-01-05 02:58:08 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:58:08.312381 | orchestrator | 2026-01-05 02:58:08 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:58:08.312459 | orchestrator | 2026-01-05 02:58:08 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:58:11.377913 | orchestrator | 2026-01-05 02:58:11 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:58:11.379185 | orchestrator | 2026-01-05 02:58:11 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:58:11.379371 | orchestrator | 2026-01-05 02:58:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:14.430609 | orchestrator | 2026-01-05 02:58:14 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:58:14.431865 | orchestrator | 2026-01-05 02:58:14 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:58:14.432278 | orchestrator | 2026-01-05 02:58:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:17.479633 | orchestrator | 2026-01-05 02:58:17 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:58:17.482454 | orchestrator | 2026-01-05 02:58:17 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:58:17.482582 | orchestrator | 2026-01-05 02:58:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:20.527499 | orchestrator | 2026-01-05 02:58:20 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:58:20.529692 | orchestrator | 2026-01-05 02:58:20 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:58:20.529748 | orchestrator | 2026-01-05 02:58:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:23.571243 | orchestrator | 2026-01-05 02:58:23 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:58:23.572159 | orchestrator | 2026-01-05 02:58:23 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:58:23.572188 | orchestrator | 2026-01-05 02:58:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:26.625933 | orchestrator | 2026-01-05 
02:58:26 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:58:26.627389 | orchestrator | 2026-01-05 02:58:26 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:58:26.627425 | orchestrator | 2026-01-05 02:58:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:29.675376 | orchestrator | 2026-01-05 02:58:29 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:58:29.676782 | orchestrator | 2026-01-05 02:58:29 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:58:29.676888 | orchestrator | 2026-01-05 02:58:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:32.731184 | orchestrator | 2026-01-05 02:58:32 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:58:32.732327 | orchestrator | 2026-01-05 02:58:32 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:58:32.732385 | orchestrator | 2026-01-05 02:58:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:35.783429 | orchestrator | 2026-01-05 02:58:35 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:58:35.785268 | orchestrator | 2026-01-05 02:58:35 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:58:35.785325 | orchestrator | 2026-01-05 02:58:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:38.828216 | orchestrator | 2026-01-05 02:58:38 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:58:38.828734 | orchestrator | 2026-01-05 02:58:38 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:58:38.828771 | orchestrator | 2026-01-05 02:58:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:41.871198 | orchestrator | 2026-01-05 02:58:41 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 02:58:41.875387 | orchestrator | 2026-01-05 02:58:41 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:58:41.875473 | orchestrator | 2026-01-05 02:58:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:44.925166 | orchestrator | 2026-01-05 02:58:44 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:58:44.927422 | orchestrator | 2026-01-05 02:58:44 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:58:44.927502 | orchestrator | 2026-01-05 02:58:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:47.978002 | orchestrator | 2026-01-05 02:58:47 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:58:47.979784 | orchestrator | 2026-01-05 02:58:47 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:58:47.979886 | orchestrator | 2026-01-05 02:58:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:51.031452 | orchestrator | 2026-01-05 02:58:51 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:58:51.033321 | orchestrator | 2026-01-05 02:58:51 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:58:51.033370 | orchestrator | 2026-01-05 02:58:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:54.073012 | orchestrator | 2026-01-05 02:58:54 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:58:54.076101 | orchestrator | 2026-01-05 02:58:54 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:58:54.076979 | orchestrator | 2026-01-05 02:58:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:58:57.135073 | orchestrator | 2026-01-05 02:58:57 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:58:57.137249 | orchestrator | 2026-01-05 02:58:57 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:58:57.137307 | orchestrator | 2026-01-05 02:58:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:00.192467 | orchestrator | 2026-01-05 02:59:00 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:59:00.194775 | orchestrator | 2026-01-05 02:59:00 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:59:00.194871 | orchestrator | 2026-01-05 02:59:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:03.242885 | orchestrator | 2026-01-05 02:59:03 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:59:03.244773 | orchestrator | 2026-01-05 02:59:03 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:59:03.244896 | orchestrator | 2026-01-05 02:59:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:06.288775 | orchestrator | 2026-01-05 02:59:06 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:59:06.290943 | orchestrator | 2026-01-05 02:59:06 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:59:06.290995 | orchestrator | 2026-01-05 02:59:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:09.343484 | orchestrator | 2026-01-05 02:59:09 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:59:09.345257 | orchestrator | 2026-01-05 02:59:09 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:59:09.345426 | orchestrator | 2026-01-05 02:59:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:12.387517 | orchestrator | 2026-01-05 02:59:12 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:59:12.390010 | orchestrator | 2026-01-05 02:59:12 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
02:59:12.390173 | orchestrator | 2026-01-05 02:59:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:15.439900 | orchestrator | 2026-01-05 02:59:15 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:59:15.441742 | orchestrator | 2026-01-05 02:59:15 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:59:15.441814 | orchestrator | 2026-01-05 02:59:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:18.482917 | orchestrator | 2026-01-05 02:59:18 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:59:18.483709 | orchestrator | 2026-01-05 02:59:18 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:59:18.483743 | orchestrator | 2026-01-05 02:59:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:21.526414 | orchestrator | 2026-01-05 02:59:21 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:59:21.528319 | orchestrator | 2026-01-05 02:59:21 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:59:21.528526 | orchestrator | 2026-01-05 02:59:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:24.577726 | orchestrator | 2026-01-05 02:59:24 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:59:24.578965 | orchestrator | 2026-01-05 02:59:24 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:59:24.579040 | orchestrator | 2026-01-05 02:59:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:27.624760 | orchestrator | 2026-01-05 02:59:27 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:59:27.625973 | orchestrator | 2026-01-05 02:59:27 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:59:27.626074 | orchestrator | 2026-01-05 02:59:27 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 02:59:30.672781 | orchestrator | 2026-01-05 02:59:30 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:59:30.675008 | orchestrator | 2026-01-05 02:59:30 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:59:30.675086 | orchestrator | 2026-01-05 02:59:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:33.722579 | orchestrator | 2026-01-05 02:59:33 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:59:33.724077 | orchestrator | 2026-01-05 02:59:33 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:59:33.724113 | orchestrator | 2026-01-05 02:59:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:36.770244 | orchestrator | 2026-01-05 02:59:36 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:59:36.771757 | orchestrator | 2026-01-05 02:59:36 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:59:36.771801 | orchestrator | 2026-01-05 02:59:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:39.813418 | orchestrator | 2026-01-05 02:59:39 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:59:39.815131 | orchestrator | 2026-01-05 02:59:39 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:59:39.815185 | orchestrator | 2026-01-05 02:59:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:42.867085 | orchestrator | 2026-01-05 02:59:42 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:59:42.867708 | orchestrator | 2026-01-05 02:59:42 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:59:42.867745 | orchestrator | 2026-01-05 02:59:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:45.923048 | orchestrator | 2026-01-05 
02:59:45 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:59:45.924303 | orchestrator | 2026-01-05 02:59:45 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:59:45.924806 | orchestrator | 2026-01-05 02:59:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:48.977056 | orchestrator | 2026-01-05 02:59:48 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:59:48.978918 | orchestrator | 2026-01-05 02:59:48 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:59:48.978983 | orchestrator | 2026-01-05 02:59:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:52.027941 | orchestrator | 2026-01-05 02:59:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:59:52.030098 | orchestrator | 2026-01-05 02:59:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:59:52.030167 | orchestrator | 2026-01-05 02:59:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:55.078778 | orchestrator | 2026-01-05 02:59:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:59:55.079309 | orchestrator | 2026-01-05 02:59:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:59:55.079346 | orchestrator | 2026-01-05 02:59:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 02:59:58.130257 | orchestrator | 2026-01-05 02:59:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 02:59:58.133079 | orchestrator | 2026-01-05 02:59:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 02:59:58.133167 | orchestrator | 2026-01-05 02:59:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:01.182234 | orchestrator | 2026-01-05 03:00:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 03:00:01.184768 | orchestrator | 2026-01-05 03:00:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:00:01.184879 | orchestrator | 2026-01-05 03:00:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:04.230852 | orchestrator | 2026-01-05 03:00:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:00:04.233325 | orchestrator | 2026-01-05 03:00:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:00:04.233570 | orchestrator | 2026-01-05 03:00:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:07.278451 | orchestrator | 2026-01-05 03:00:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:00:07.279654 | orchestrator | 2026-01-05 03:00:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:00:07.279715 | orchestrator | 2026-01-05 03:00:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:10.326947 | orchestrator | 2026-01-05 03:00:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:00:10.329227 | orchestrator | 2026-01-05 03:00:10 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:00:10.329422 | orchestrator | 2026-01-05 03:00:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:13.379080 | orchestrator | 2026-01-05 03:00:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:00:13.382082 | orchestrator | 2026-01-05 03:00:13 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:00:13.382164 | orchestrator | 2026-01-05 03:00:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:16.435969 | orchestrator | 2026-01-05 03:00:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:00:16.437847 | orchestrator | 2026-01-05 03:00:16 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:00:16.437940 | orchestrator | 2026-01-05 03:00:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:19.485259 | orchestrator | 2026-01-05 03:00:19 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:00:19.486832 | orchestrator | 2026-01-05 03:00:19 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:00:19.486898 | orchestrator | 2026-01-05 03:00:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:22.538220 | orchestrator | 2026-01-05 03:00:22 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:00:22.539024 | orchestrator | 2026-01-05 03:00:22 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:00:22.539068 | orchestrator | 2026-01-05 03:00:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:25.589574 | orchestrator | 2026-01-05 03:00:25 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:00:25.590676 | orchestrator | 2026-01-05 03:00:25 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:00:25.590787 | orchestrator | 2026-01-05 03:00:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:28.639102 | orchestrator | 2026-01-05 03:00:28 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:00:28.640856 | orchestrator | 2026-01-05 03:00:28 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:00:28.640898 | orchestrator | 2026-01-05 03:00:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:31.685424 | orchestrator | 2026-01-05 03:00:31 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:00:31.686073 | orchestrator | 2026-01-05 03:00:31 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
03:00:31.686342 | orchestrator | 2026-01-05 03:00:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:34.739646 | orchestrator | 2026-01-05 03:00:34 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:00:34.741065 | orchestrator | 2026-01-05 03:00:34 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:00:34.741123 | orchestrator | 2026-01-05 03:00:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:37.796118 | orchestrator | 2026-01-05 03:00:37 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:00:37.796702 | orchestrator | 2026-01-05 03:00:37 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:00:37.796727 | orchestrator | 2026-01-05 03:00:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:40.843144 | orchestrator | 2026-01-05 03:00:40 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:00:40.843264 | orchestrator | 2026-01-05 03:00:40 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:00:40.843274 | orchestrator | 2026-01-05 03:00:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:43.888899 | orchestrator | 2026-01-05 03:00:43 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:00:43.889321 | orchestrator | 2026-01-05 03:00:43 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:00:43.889629 | orchestrator | 2026-01-05 03:00:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:46.933789 | orchestrator | 2026-01-05 03:00:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:00:46.933913 | orchestrator | 2026-01-05 03:00:46 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:00:46.933925 | orchestrator | 2026-01-05 03:00:46 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:00:49.979220 | orchestrator | 2026-01-05 03:00:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:00:49.981374 | orchestrator | 2026-01-05 03:00:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:00:49.981438 | orchestrator | 2026-01-05 03:00:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:53.040587 | orchestrator | 2026-01-05 03:00:53 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:00:53.040726 | orchestrator | 2026-01-05 03:00:53 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:00:53.040739 | orchestrator | 2026-01-05 03:00:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:56.090311 | orchestrator | 2026-01-05 03:00:56 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:00:56.093995 | orchestrator | 2026-01-05 03:00:56 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:00:56.094119 | orchestrator | 2026-01-05 03:00:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:00:59.143922 | orchestrator | 2026-01-05 03:00:59 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:00:59.145917 | orchestrator | 2026-01-05 03:00:59 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:00:59.145964 | orchestrator | 2026-01-05 03:00:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:01:02.200768 | orchestrator | 2026-01-05 03:01:02 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:01:02.202278 | orchestrator | 2026-01-05 03:01:02 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:01:02.202333 | orchestrator | 2026-01-05 03:01:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:01:05.253452 | orchestrator | 2026-01-05 
03:01:05 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:01:05.255099 | orchestrator | 2026-01-05 03:01:05 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:01:05.255140 | orchestrator | 2026-01-05 03:01:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:01:08.308646 | orchestrator | 2026-01-05 03:01:08 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:01:08.311672 | orchestrator | 2026-01-05 03:01:08 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:01:08.311771 | orchestrator | 2026-01-05 03:01:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:01:11.363226 | orchestrator | 2026-01-05 03:01:11 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:01:11.365767 | orchestrator | 2026-01-05 03:01:11 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:01:11.365849 | orchestrator | 2026-01-05 03:01:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:01:14.415148 | orchestrator | 2026-01-05 03:01:14 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:01:14.417310 | orchestrator | 2026-01-05 03:01:14 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:01:14.417469 | orchestrator | 2026-01-05 03:01:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:01:17.468700 | orchestrator | 2026-01-05 03:01:17 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:01:17.468808 | orchestrator | 2026-01-05 03:01:17 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:01:17.468825 | orchestrator | 2026-01-05 03:01:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:01:20.518065 | orchestrator | 2026-01-05 03:01:20 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 03:01:20.519287 | orchestrator | 2026-01-05 03:01:20 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:01:20.519551 | orchestrator | 2026-01-05 03:01:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:01:23.572342 | orchestrator | 2026-01-05 03:01:23 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:01:23.574758 | orchestrator | 2026-01-05 03:01:23 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:01:23.574888 | orchestrator | 2026-01-05 03:01:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:01:26.622967 | orchestrator | 2026-01-05 03:01:26 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:01:26.627209 | orchestrator | 2026-01-05 03:01:26 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:01:26.627297 | orchestrator | 2026-01-05 03:01:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:01:29.685363 | orchestrator | 2026-01-05 03:01:29 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:01:29.687980 | orchestrator | 2026-01-05 03:01:29 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:01:29.688123 | orchestrator | 2026-01-05 03:01:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:01:32.742499 | orchestrator | 2026-01-05 03:01:32 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:01:32.744061 | orchestrator | 2026-01-05 03:01:32 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:01:32.744130 | orchestrator | 2026-01-05 03:01:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:01:35.795037 | orchestrator | 2026-01-05 03:01:35 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:01:35.798221 | orchestrator | 2026-01-05 03:01:35 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:01:35.798291 | orchestrator | 2026-01-05 03:01:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:01:38.854125 | orchestrator | 2026-01-05 03:01:38 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:01:38.856538 | orchestrator | 2026-01-05 03:01:38 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:01:38.856583 | orchestrator | 2026-01-05 03:01:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:01:41.903187 | orchestrator | 2026-01-05 03:01:41 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:01:41.903729 | orchestrator | 2026-01-05 03:01:41 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:01:41.903762 | orchestrator | 2026-01-05 03:01:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:01:44.954994 | orchestrator | 2026-01-05 03:01:44 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:01:44.956551 | orchestrator | 2026-01-05 03:01:44 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:01:44.956586 | orchestrator | 2026-01-05 03:01:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:01:48.018947 | orchestrator | 2026-01-05 03:01:48 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:01:48.021706 | orchestrator | 2026-01-05 03:01:48 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:01:48.021799 | orchestrator | 2026-01-05 03:01:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:01:51.068988 | orchestrator | 2026-01-05 03:01:51 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:01:51.071951 | orchestrator | 2026-01-05 03:01:51 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
03:01:51.072028 | orchestrator | 2026-01-05 03:01:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:01:54.117004 | orchestrator | 2026-01-05 03:01:54 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:01:54.117120 | orchestrator | 2026-01-05 03:01:54 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:01:54.117151 | orchestrator | 2026-01-05 03:01:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:01:57.166290 | orchestrator | 2026-01-05 03:01:57 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:01:57.166806 | orchestrator | 2026-01-05 03:01:57 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:01:57.166886 | orchestrator | 2026-01-05 03:01:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:02:00.210603 | orchestrator | 2026-01-05 03:02:00 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:02:00.211829 | orchestrator | 2026-01-05 03:02:00 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:02:00.212481 | orchestrator | 2026-01-05 03:02:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:02:03.265941 | orchestrator | 2026-01-05 03:02:03 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:02:03.268232 | orchestrator | 2026-01-05 03:02:03 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:02:03.268333 | orchestrator | 2026-01-05 03:02:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:02:06.311806 | orchestrator | 2026-01-05 03:02:06 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:02:06.314607 | orchestrator | 2026-01-05 03:02:06 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:02:06.314686 | orchestrator | 2026-01-05 03:02:06 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:02:09.365221 | orchestrator | 2026-01-05 03:02:09 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:02:09.368552 | orchestrator | 2026-01-05 03:02:09 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:02:09.368667 | orchestrator | 2026-01-05 03:02:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:02:12.419859 | orchestrator | 2026-01-05 03:02:12 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:02:12.422743 | orchestrator | 2026-01-05 03:02:12 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:02:12.423686 | orchestrator | 2026-01-05 03:02:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:02:15.476442 | orchestrator | 2026-01-05 03:02:15 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:02:15.477385 | orchestrator | 2026-01-05 03:02:15 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:02:15.477831 | orchestrator | 2026-01-05 03:02:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:02:18.526714 | orchestrator | 2026-01-05 03:02:18 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:02:18.527470 | orchestrator | 2026-01-05 03:02:18 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:02:18.527567 | orchestrator | 2026-01-05 03:02:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:02:21.579167 | orchestrator | 2026-01-05 03:02:21 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:02:21.580701 | orchestrator | 2026-01-05 03:02:21 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:02:21.580750 | orchestrator | 2026-01-05 03:02:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:02:24.627172 | orchestrator | 2026-01-05 
03:02:24 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:02:24.629238 | orchestrator | 2026-01-05 03:02:24 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:02:24.629314 | orchestrator | 2026-01-05 03:02:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:02:27.678707 | orchestrator | 2026-01-05 03:02:27 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:02:27.679443 | orchestrator | 2026-01-05 03:02:27 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:02:27.679468 | orchestrator | 2026-01-05 03:02:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:02:30.726947 | orchestrator | 2026-01-05 03:02:30 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:02:30.728353 | orchestrator | 2026-01-05 03:02:30 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:02:30.728409 | orchestrator | 2026-01-05 03:02:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:02:33.781577 | orchestrator | 2026-01-05 03:02:33 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:02:33.784126 | orchestrator | 2026-01-05 03:02:33 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:02:33.784199 | orchestrator | 2026-01-05 03:02:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:02:36.830302 | orchestrator | 2026-01-05 03:02:36 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:02:36.832099 | orchestrator | 2026-01-05 03:02:36 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:02:36.832136 | orchestrator | 2026-01-05 03:02:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:02:39.885966 | orchestrator | 2026-01-05 03:02:39 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 03:02:39.888793 | orchestrator | 2026-01-05 03:02:39 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:02:39.888887 | orchestrator | 2026-01-05 03:02:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:02:42.936612 | orchestrator | 2026-01-05 03:02:42 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:02:42.937971 | orchestrator | 2026-01-05 03:02:42 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:02:42.938038 | orchestrator | 2026-01-05 03:02:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:02:45.987588 | orchestrator | 2026-01-05 03:02:45 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:02:45.990266 | orchestrator | 2026-01-05 03:02:45 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:02:45.990332 | orchestrator | 2026-01-05 03:02:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:02:49.044823 | orchestrator | 2026-01-05 03:02:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:02:49.045759 | orchestrator | 2026-01-05 03:02:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:02:49.045811 | orchestrator | 2026-01-05 03:02:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:02:52.084233 | orchestrator | 2026-01-05 03:02:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:02:52.084376 | orchestrator | 2026-01-05 03:02:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:02:52.084389 | orchestrator | 2026-01-05 03:02:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:02:55.127527 | orchestrator | 2026-01-05 03:02:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:02:55.131360 | orchestrator | 2026-01-05 03:02:55 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:02:55.131430 | orchestrator | 2026-01-05 03:02:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:02:58.180296 | orchestrator | 2026-01-05 03:02:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:02:58.181705 | orchestrator | 2026-01-05 03:02:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:02:58.181787 | orchestrator | 2026-01-05 03:02:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:03:01.233038 | orchestrator | 2026-01-05 03:03:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:03:01.234375 | orchestrator | 2026-01-05 03:03:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:03:01.234450 | orchestrator | 2026-01-05 03:03:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:03:04.291178 | orchestrator | 2026-01-05 03:03:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:03:04.294583 | orchestrator | 2026-01-05 03:03:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:03:04.294701 | orchestrator | 2026-01-05 03:03:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:03:07.344994 | orchestrator | 2026-01-05 03:03:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:03:07.349041 | orchestrator | 2026-01-05 03:03:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:03:07.349107 | orchestrator | 2026-01-05 03:03:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:03:10.404779 | orchestrator | 2026-01-05 03:03:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:03:10.408949 | orchestrator | 2026-01-05 03:03:10 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
03:03:10.409059 | orchestrator | 2026-01-05 03:03:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:03:13.458197 | orchestrator | 2026-01-05 03:03:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:03:13.462612 | orchestrator | 2026-01-05 03:03:13 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:03:13.462683 | orchestrator | 2026-01-05 03:03:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:03:16.520215 | orchestrator | 2026-01-05 03:03:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:03:16.523740 | orchestrator | 2026-01-05 03:03:16 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:03:16.523824 | orchestrator | 2026-01-05 03:03:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:03:19.571690 | orchestrator | 2026-01-05 03:03:19 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:03:19.575182 | orchestrator | 2026-01-05 03:03:19 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:03:19.575275 | orchestrator | 2026-01-05 03:03:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:03:22.622537 | orchestrator | 2026-01-05 03:03:22 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:03:22.623698 | orchestrator | 2026-01-05 03:03:22 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:03:22.623758 | orchestrator | 2026-01-05 03:03:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:03:25.673039 | orchestrator | 2026-01-05 03:03:25 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:03:25.674656 | orchestrator | 2026-01-05 03:03:25 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:03:25.674707 | orchestrator | 2026-01-05 03:03:25 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:03:28.724204 | orchestrator | 2026-01-05 03:03:28 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:03:28.727393 | orchestrator | 2026-01-05 03:03:28 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:03:28.727454 | orchestrator | 2026-01-05 03:03:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:03:31.773365 | orchestrator | 2026-01-05 03:03:31 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:03:31.774084 | orchestrator | 2026-01-05 03:03:31 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:03:31.774111 | orchestrator | 2026-01-05 03:03:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:03:34.819964 | orchestrator | 2026-01-05 03:03:34 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:03:34.821671 | orchestrator | 2026-01-05 03:03:34 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:03:34.821769 | orchestrator | 2026-01-05 03:03:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:03:37.869624 | orchestrator | 2026-01-05 03:03:37 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:03:37.873318 | orchestrator | 2026-01-05 03:03:37 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:03:37.873415 | orchestrator | 2026-01-05 03:03:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:03:40.925805 | orchestrator | 2026-01-05 03:03:40 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:03:40.928006 | orchestrator | 2026-01-05 03:03:40 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:03:40.928076 | orchestrator | 2026-01-05 03:03:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:03:43.971746 | orchestrator | 2026-01-05 
03:03:43 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:03:43.973638 | orchestrator | 2026-01-05 03:03:43 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:03:43.973751 | orchestrator | 2026-01-05 03:03:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:03:47.026162 | orchestrator | 2026-01-05 03:03:47 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:03:47.027728 | orchestrator | 2026-01-05 03:03:47 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:03:47.027774 | orchestrator | 2026-01-05 03:03:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:03:50.077981 | orchestrator | 2026-01-05 03:03:50 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:03:50.079854 | orchestrator | 2026-01-05 03:03:50 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:03:50.079922 | orchestrator | 2026-01-05 03:03:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:03:53.124114 | orchestrator | 2026-01-05 03:03:53 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:03:53.124344 | orchestrator | 2026-01-05 03:03:53 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:03:53.124365 | orchestrator | 2026-01-05 03:03:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:03:56.176045 | orchestrator | 2026-01-05 03:03:56 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:03:56.177932 | orchestrator | 2026-01-05 03:03:56 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:03:56.178079 | orchestrator | 2026-01-05 03:03:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:03:59.226998 | orchestrator | 2026-01-05 03:03:59 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 03:03:59.230163 | orchestrator | 2026-01-05 03:03:59 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:03:59.230240 | orchestrator | 2026-01-05 03:03:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:04:02.286200 | orchestrator | 2026-01-05 03:04:02 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:04:02.287767 | orchestrator | 2026-01-05 03:04:02 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:04:02.287815 | orchestrator | 2026-01-05 03:04:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:04:05.341662 | orchestrator | 2026-01-05 03:04:05 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:04:05.342321 | orchestrator | 2026-01-05 03:04:05 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:04:05.342508 | orchestrator | 2026-01-05 03:04:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:04:08.387368 | orchestrator | 2026-01-05 03:04:08 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:04:08.389293 | orchestrator | 2026-01-05 03:04:08 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:04:08.389357 | orchestrator | 2026-01-05 03:04:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:04:11.431131 | orchestrator | 2026-01-05 03:04:11 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:04:11.433066 | orchestrator | 2026-01-05 03:04:11 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:04:11.433299 | orchestrator | 2026-01-05 03:04:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:04:14.479323 | orchestrator | 2026-01-05 03:04:14 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:04:14.480431 | orchestrator | 2026-01-05 03:04:14 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:04:14.480490 | orchestrator | 2026-01-05 03:04:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:04:17.531140 | orchestrator | 2026-01-05 03:04:17 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:04:17.533463 | orchestrator | 2026-01-05 03:04:17 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:04:17.533533 | orchestrator | 2026-01-05 03:04:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:04:20.576109 | orchestrator | 2026-01-05 03:04:20 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:04:20.576736 | orchestrator | 2026-01-05 03:04:20 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:04:20.576802 | orchestrator | 2026-01-05 03:04:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:04:23.628682 | orchestrator | 2026-01-05 03:04:23 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:04:23.631123 | orchestrator | 2026-01-05 03:04:23 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:04:23.631187 | orchestrator | 2026-01-05 03:04:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:04:26.681633 | orchestrator | 2026-01-05 03:04:26 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:04:26.683834 | orchestrator | 2026-01-05 03:04:26 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:04:26.683875 | orchestrator | 2026-01-05 03:04:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:04:29.731691 | orchestrator | 2026-01-05 03:04:29 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:04:29.732724 | orchestrator | 2026-01-05 03:04:29 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
03:04:29.732866 | orchestrator | 2026-01-05 03:04:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:04:32.777817 | orchestrator | 2026-01-05 03:04:32 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:04:32.780173 | orchestrator | 2026-01-05 03:04:32 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:04:32.780239 | orchestrator | 2026-01-05 03:04:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:04:35.827693 | orchestrator | 2026-01-05 03:04:35 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:04:35.830174 | orchestrator | 2026-01-05 03:04:35 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:04:35.830307 | orchestrator | 2026-01-05 03:04:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:04:38.878696 | orchestrator | 2026-01-05 03:04:38 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:04:38.880335 | orchestrator | 2026-01-05 03:04:38 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:04:38.880440 | orchestrator | 2026-01-05 03:04:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:04:41.925626 | orchestrator | 2026-01-05 03:04:41 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:04:41.927350 | orchestrator | 2026-01-05 03:04:41 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:04:41.927398 | orchestrator | 2026-01-05 03:04:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:04:44.971377 | orchestrator | 2026-01-05 03:04:44 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:04:44.972927 | orchestrator | 2026-01-05 03:04:44 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:04:44.972973 | orchestrator | 2026-01-05 03:04:44 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:04:48.019849 | orchestrator | 2026-01-05 03:04:48 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:04:48.021324 | orchestrator | 2026-01-05 03:04:48 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:04:48.021376 | orchestrator | 2026-01-05 03:04:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:04:51.068510 | orchestrator | 2026-01-05 03:04:51 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:04:51.069611 | orchestrator | 2026-01-05 03:04:51 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:04:51.069759 | orchestrator | 2026-01-05 03:04:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:04:54.112228 | orchestrator | 2026-01-05 03:04:54 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:04:54.113001 | orchestrator | 2026-01-05 03:04:54 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:04:54.113089 | orchestrator | 2026-01-05 03:04:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:04:57.161556 | orchestrator | 2026-01-05 03:04:57 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:04:57.165182 | orchestrator | 2026-01-05 03:04:57 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:04:57.165252 | orchestrator | 2026-01-05 03:04:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:05:00.218408 | orchestrator | 2026-01-05 03:05:00 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:05:00.221178 | orchestrator | 2026-01-05 03:05:00 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:05:00.221248 | orchestrator | 2026-01-05 03:05:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:05:03.262902 | orchestrator | 2026-01-05 
03:09:47.097023 | orchestrator | 2026-01-05 03:09:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:09:50.144369 | orchestrator | 2026-01-05 03:09:50 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:09:50.145711 | orchestrator | 2026-01-05 03:09:50 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:09:50.145758 | orchestrator | 2026-01-05 03:09:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:09:53.201147 | orchestrator | 2026-01-05 03:09:53 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:09:53.201815 | orchestrator | 2026-01-05 03:09:53 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:09:53.201915 | orchestrator | 2026-01-05 03:09:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:09:56.250615 | orchestrator | 2026-01-05 03:09:56 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:09:56.253333 | orchestrator | 2026-01-05 03:09:56 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:09:56.253596 | orchestrator | 2026-01-05 03:09:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:09:59.301170 | orchestrator | 2026-01-05 03:09:59 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:09:59.303846 | orchestrator | 2026-01-05 03:09:59 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:09:59.303892 | orchestrator | 2026-01-05 03:09:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:10:02.356757 | orchestrator | 2026-01-05 03:10:02 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:10:02.357021 | orchestrator | 2026-01-05 03:10:02 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:10:02.357283 | orchestrator | 2026-01-05 03:10:02 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:10:05.399352 | orchestrator | 2026-01-05 03:10:05 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:10:05.401361 | orchestrator | 2026-01-05 03:10:05 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:10:05.401450 | orchestrator | 2026-01-05 03:10:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:10:08.453421 | orchestrator | 2026-01-05 03:10:08 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:10:08.454468 | orchestrator | 2026-01-05 03:10:08 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:10:08.454553 | orchestrator | 2026-01-05 03:10:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:10:11.503166 | orchestrator | 2026-01-05 03:10:11 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:10:11.505649 | orchestrator | 2026-01-05 03:10:11 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:10:11.505678 | orchestrator | 2026-01-05 03:10:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:10:14.553182 | orchestrator | 2026-01-05 03:10:14 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:10:14.554894 | orchestrator | 2026-01-05 03:10:14 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:10:14.554957 | orchestrator | 2026-01-05 03:10:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:10:17.602888 | orchestrator | 2026-01-05 03:10:17 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:10:17.604636 | orchestrator | 2026-01-05 03:10:17 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:10:17.604683 | orchestrator | 2026-01-05 03:10:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:10:20.655737 | orchestrator | 2026-01-05 
03:10:20 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:10:20.657399 | orchestrator | 2026-01-05 03:10:20 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:10:20.657434 | orchestrator | 2026-01-05 03:10:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:10:23.713628 | orchestrator | 2026-01-05 03:10:23 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:10:23.715163 | orchestrator | 2026-01-05 03:10:23 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:10:23.715262 | orchestrator | 2026-01-05 03:10:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:10:26.762785 | orchestrator | 2026-01-05 03:10:26 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:10:26.764160 | orchestrator | 2026-01-05 03:10:26 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:10:26.764217 | orchestrator | 2026-01-05 03:10:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:10:29.813070 | orchestrator | 2026-01-05 03:10:29 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:10:29.814578 | orchestrator | 2026-01-05 03:10:29 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:10:29.814633 | orchestrator | 2026-01-05 03:10:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:10:32.865928 | orchestrator | 2026-01-05 03:10:32 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:10:32.867973 | orchestrator | 2026-01-05 03:10:32 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:10:32.868045 | orchestrator | 2026-01-05 03:10:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:10:35.917767 | orchestrator | 2026-01-05 03:10:35 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 03:10:35.918370 | orchestrator | 2026-01-05 03:10:35 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:10:35.918418 | orchestrator | 2026-01-05 03:10:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:10:38.971914 | orchestrator | 2026-01-05 03:10:38 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:10:38.973392 | orchestrator | 2026-01-05 03:10:38 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:10:38.973442 | orchestrator | 2026-01-05 03:10:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:10:42.020343 | orchestrator | 2026-01-05 03:10:42 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:10:42.022069 | orchestrator | 2026-01-05 03:10:42 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:10:42.022143 | orchestrator | 2026-01-05 03:10:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:10:45.071109 | orchestrator | 2026-01-05 03:10:45 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:10:45.074530 | orchestrator | 2026-01-05 03:10:45 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:10:45.074660 | orchestrator | 2026-01-05 03:10:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:10:48.127350 | orchestrator | 2026-01-05 03:10:48 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:10:48.127828 | orchestrator | 2026-01-05 03:10:48 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:10:48.127852 | orchestrator | 2026-01-05 03:10:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:10:51.176808 | orchestrator | 2026-01-05 03:10:51 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:10:51.179219 | orchestrator | 2026-01-05 03:10:51 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:10:51.179364 | orchestrator | 2026-01-05 03:10:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:10:54.223274 | orchestrator | 2026-01-05 03:10:54 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:10:54.224673 | orchestrator | 2026-01-05 03:10:54 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:10:54.224719 | orchestrator | 2026-01-05 03:10:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:10:57.268669 | orchestrator | 2026-01-05 03:10:57 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:10:57.270874 | orchestrator | 2026-01-05 03:10:57 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:10:57.270936 | orchestrator | 2026-01-05 03:10:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:11:00.319697 | orchestrator | 2026-01-05 03:11:00 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:11:00.320367 | orchestrator | 2026-01-05 03:11:00 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:11:00.320503 | orchestrator | 2026-01-05 03:11:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:11:03.380831 | orchestrator | 2026-01-05 03:11:03 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:11:03.382094 | orchestrator | 2026-01-05 03:11:03 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:11:03.382150 | orchestrator | 2026-01-05 03:11:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:11:06.428923 | orchestrator | 2026-01-05 03:11:06 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:11:06.432107 | orchestrator | 2026-01-05 03:11:06 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
03:11:06.432203 | orchestrator | 2026-01-05 03:11:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:11:09.483216 | orchestrator | 2026-01-05 03:11:09 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:11:09.483906 | orchestrator | 2026-01-05 03:11:09 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:11:09.483990 | orchestrator | 2026-01-05 03:11:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:11:12.526850 | orchestrator | 2026-01-05 03:11:12 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:11:12.528153 | orchestrator | 2026-01-05 03:11:12 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:11:12.528207 | orchestrator | 2026-01-05 03:11:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:11:15.568616 | orchestrator | 2026-01-05 03:11:15 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:11:15.569685 | orchestrator | 2026-01-05 03:11:15 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:11:15.570249 | orchestrator | 2026-01-05 03:11:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:11:18.627275 | orchestrator | 2026-01-05 03:11:18 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:11:18.627407 | orchestrator | 2026-01-05 03:11:18 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:11:18.627435 | orchestrator | 2026-01-05 03:11:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:11:21.680803 | orchestrator | 2026-01-05 03:11:21 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:11:21.680900 | orchestrator | 2026-01-05 03:11:21 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:11:21.680959 | orchestrator | 2026-01-05 03:11:21 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:11:24.732562 | orchestrator | 2026-01-05 03:11:24 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:11:24.734168 | orchestrator | 2026-01-05 03:11:24 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:11:24.734326 | orchestrator | 2026-01-05 03:11:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:11:27.780389 | orchestrator | 2026-01-05 03:11:27 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:11:27.782566 | orchestrator | 2026-01-05 03:11:27 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:11:27.782611 | orchestrator | 2026-01-05 03:11:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:11:30.838101 | orchestrator | 2026-01-05 03:11:30 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:11:30.840063 | orchestrator | 2026-01-05 03:11:30 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:11:30.840142 | orchestrator | 2026-01-05 03:11:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:11:33.899129 | orchestrator | 2026-01-05 03:11:33 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:11:33.899318 | orchestrator | 2026-01-05 03:11:33 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:11:33.899335 | orchestrator | 2026-01-05 03:11:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:11:36.946710 | orchestrator | 2026-01-05 03:11:36 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:11:36.947848 | orchestrator | 2026-01-05 03:11:36 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:11:36.947943 | orchestrator | 2026-01-05 03:11:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:11:40.001087 | orchestrator | 2026-01-05 
03:11:39 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:11:40.001774 | orchestrator | 2026-01-05 03:11:40 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:11:40.001829 | orchestrator | 2026-01-05 03:11:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:11:43.050189 | orchestrator | 2026-01-05 03:11:43 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:11:43.050267 | orchestrator | 2026-01-05 03:11:43 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:11:43.050273 | orchestrator | 2026-01-05 03:11:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:11:46.101238 | orchestrator | 2026-01-05 03:11:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:11:46.102399 | orchestrator | 2026-01-05 03:11:46 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:11:46.102447 | orchestrator | 2026-01-05 03:11:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:11:49.157931 | orchestrator | 2026-01-05 03:11:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:11:49.158394 | orchestrator | 2026-01-05 03:11:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:11:49.158423 | orchestrator | 2026-01-05 03:11:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:11:52.209268 | orchestrator | 2026-01-05 03:11:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:11:52.211049 | orchestrator | 2026-01-05 03:11:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:11:52.211127 | orchestrator | 2026-01-05 03:11:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:11:55.267964 | orchestrator | 2026-01-05 03:11:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 03:11:55.271520 | orchestrator | 2026-01-05 03:11:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:11:55.271636 | orchestrator | 2026-01-05 03:11:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:11:58.319640 | orchestrator | 2026-01-05 03:11:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:11:58.321154 | orchestrator | 2026-01-05 03:11:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:11:58.321219 | orchestrator | 2026-01-05 03:11:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:12:01.366420 | orchestrator | 2026-01-05 03:12:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:12:01.368201 | orchestrator | 2026-01-05 03:12:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:12:01.368271 | orchestrator | 2026-01-05 03:12:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:12:04.424741 | orchestrator | 2026-01-05 03:12:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:12:04.426155 | orchestrator | 2026-01-05 03:12:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:12:04.426211 | orchestrator | 2026-01-05 03:12:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:12:07.473477 | orchestrator | 2026-01-05 03:12:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:12:07.475608 | orchestrator | 2026-01-05 03:12:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:12:07.475671 | orchestrator | 2026-01-05 03:12:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:12:10.520667 | orchestrator | 2026-01-05 03:12:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:12:10.522929 | orchestrator | 2026-01-05 03:12:10 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:12:10.523013 | orchestrator | 2026-01-05 03:12:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:12:13.571825 | orchestrator | 2026-01-05 03:12:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:12:13.572561 | orchestrator | 2026-01-05 03:12:13 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:12:13.572608 | orchestrator | 2026-01-05 03:12:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:12:16.627604 | orchestrator | 2026-01-05 03:12:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:12:16.629519 | orchestrator | 2026-01-05 03:12:16 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:12:16.629619 | orchestrator | 2026-01-05 03:12:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:12:19.680256 | orchestrator | 2026-01-05 03:12:19 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:12:19.681284 | orchestrator | 2026-01-05 03:12:19 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:12:19.681566 | orchestrator | 2026-01-05 03:12:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:12:22.725928 | orchestrator | 2026-01-05 03:12:22 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:12:22.726236 | orchestrator | 2026-01-05 03:12:22 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:12:22.726426 | orchestrator | 2026-01-05 03:12:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:12:25.771420 | orchestrator | 2026-01-05 03:12:25 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:12:25.774213 | orchestrator | 2026-01-05 03:12:25 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
03:12:25.774292 | orchestrator | 2026-01-05 03:12:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:12:28.819698 | orchestrator | 2026-01-05 03:12:28 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:12:28.821773 | orchestrator | 2026-01-05 03:12:28 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:12:28.821849 | orchestrator | 2026-01-05 03:12:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:12:31.869219 | orchestrator | 2026-01-05 03:12:31 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:12:31.870886 | orchestrator | 2026-01-05 03:12:31 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:12:31.870943 | orchestrator | 2026-01-05 03:12:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:12:34.921313 | orchestrator | 2026-01-05 03:12:34 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:12:34.923721 | orchestrator | 2026-01-05 03:12:34 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:12:34.923879 | orchestrator | 2026-01-05 03:12:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:12:37.973257 | orchestrator | 2026-01-05 03:12:37 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:12:37.974954 | orchestrator | 2026-01-05 03:12:37 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:12:37.974981 | orchestrator | 2026-01-05 03:12:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:12:41.029784 | orchestrator | 2026-01-05 03:12:41 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:12:41.031653 | orchestrator | 2026-01-05 03:12:41 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:12:41.031736 | orchestrator | 2026-01-05 03:12:41 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:12:44.085095 | orchestrator | 2026-01-05 03:12:44 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:12:44.086171 | orchestrator | 2026-01-05 03:12:44 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:12:44.086302 | orchestrator | 2026-01-05 03:12:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:12:47.133149 | orchestrator | 2026-01-05 03:12:47 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:12:47.134309 | orchestrator | 2026-01-05 03:12:47 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:12:47.134346 | orchestrator | 2026-01-05 03:12:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:12:50.187457 | orchestrator | 2026-01-05 03:12:50 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:12:50.188554 | orchestrator | 2026-01-05 03:12:50 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:12:50.188642 | orchestrator | 2026-01-05 03:12:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:12:53.242534 | orchestrator | 2026-01-05 03:12:53 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:12:53.244194 | orchestrator | 2026-01-05 03:12:53 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:12:53.244311 | orchestrator | 2026-01-05 03:12:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:12:56.293091 | orchestrator | 2026-01-05 03:12:56 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:12:56.294266 | orchestrator | 2026-01-05 03:12:56 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:12:56.294319 | orchestrator | 2026-01-05 03:12:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:12:59.348842 | orchestrator | 2026-01-05 
03:12:59 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:12:59.350376 | orchestrator | 2026-01-05 03:12:59 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:12:59.350448 | orchestrator | 2026-01-05 03:12:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:13:02.407108 | orchestrator | 2026-01-05 03:13:02 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:13:02.408875 | orchestrator | 2026-01-05 03:13:02 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:13:02.409062 | orchestrator | 2026-01-05 03:13:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:13:05.465380 | orchestrator | 2026-01-05 03:13:05 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:13:05.467224 | orchestrator | 2026-01-05 03:13:05 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:13:05.467331 | orchestrator | 2026-01-05 03:13:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:13:08.518840 | orchestrator | 2026-01-05 03:13:08 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:13:08.520340 | orchestrator | 2026-01-05 03:13:08 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:13:08.520412 | orchestrator | 2026-01-05 03:13:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:13:11.563540 | orchestrator | 2026-01-05 03:13:11 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:13:11.565298 | orchestrator | 2026-01-05 03:13:11 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:13:11.565328 | orchestrator | 2026-01-05 03:13:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:13:14.625598 | orchestrator | 2026-01-05 03:13:14 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 03:13:14.626989 | orchestrator | 2026-01-05 03:13:14 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:13:14.627037 | orchestrator | 2026-01-05 03:13:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:13:17.675653 | orchestrator | 2026-01-05 03:13:17 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:13:17.679634 | orchestrator | 2026-01-05 03:13:17 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:13:17.679722 | orchestrator | 2026-01-05 03:13:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:13:20.736985 | orchestrator | 2026-01-05 03:13:20 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:13:20.738518 | orchestrator | 2026-01-05 03:13:20 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:13:20.738617 | orchestrator | 2026-01-05 03:13:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:13:23.780914 | orchestrator | 2026-01-05 03:13:23 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:13:23.782106 | orchestrator | 2026-01-05 03:13:23 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:13:23.782166 | orchestrator | 2026-01-05 03:13:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:13:26.836851 | orchestrator | 2026-01-05 03:13:26 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:13:26.837716 | orchestrator | 2026-01-05 03:13:26 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:13:26.837764 | orchestrator | 2026-01-05 03:13:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:13:29.893148 | orchestrator | 2026-01-05 03:13:29 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:13:29.897540 | orchestrator | 2026-01-05 03:13:29 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:13:29.897788 | orchestrator | 2026-01-05 03:13:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:13:32.953408 | orchestrator | 2026-01-05 03:13:32 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:13:32.955568 | orchestrator | 2026-01-05 03:13:32 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:13:32.956225 | orchestrator | 2026-01-05 03:13:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:13:36.012814 | orchestrator | 2026-01-05 03:13:36 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:13:36.015672 | orchestrator | 2026-01-05 03:13:36 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:13:36.015767 | orchestrator | 2026-01-05 03:13:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:13:39.069314 | orchestrator | 2026-01-05 03:13:39 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:13:39.069413 | orchestrator | 2026-01-05 03:13:39 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:13:39.069424 | orchestrator | 2026-01-05 03:13:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:13:42.134010 | orchestrator | 2026-01-05 03:13:42 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:13:42.135722 | orchestrator | 2026-01-05 03:13:42 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:13:42.136129 | orchestrator | 2026-01-05 03:13:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:13:45.180966 | orchestrator | 2026-01-05 03:13:45 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:13:45.183300 | orchestrator | 2026-01-05 03:13:45 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
03:13:45.183377 | orchestrator | 2026-01-05 03:13:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:13:48.236259 | orchestrator | 2026-01-05 03:13:48 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:13:48.239185 | orchestrator | 2026-01-05 03:13:48 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:13:48.239267 | orchestrator | 2026-01-05 03:13:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:13:51.296390 | orchestrator | 2026-01-05 03:13:51 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:13:51.299947 | orchestrator | 2026-01-05 03:13:51 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:13:51.300079 | orchestrator | 2026-01-05 03:13:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:13:54.355413 | orchestrator | 2026-01-05 03:13:54 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:13:54.357991 | orchestrator | 2026-01-05 03:13:54 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:13:54.358089 | orchestrator | 2026-01-05 03:13:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:13:57.412574 | orchestrator | 2026-01-05 03:13:57 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:13:57.414400 | orchestrator | 2026-01-05 03:13:57 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:13:57.414455 | orchestrator | 2026-01-05 03:13:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:14:00.471534 | orchestrator | 2026-01-05 03:14:00 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:14:00.474463 | orchestrator | 2026-01-05 03:14:00 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:14:00.474573 | orchestrator | 2026-01-05 03:14:00 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:14:03.524288 | orchestrator | 2026-01-05 03:14:03 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:14:03.526536 | orchestrator | 2026-01-05 03:14:03 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:14:03.526606 | orchestrator | 2026-01-05 03:14:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:14:06.581249 | orchestrator | 2026-01-05 03:14:06 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:14:06.583425 | orchestrator | 2026-01-05 03:14:06 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:14:06.583508 | orchestrator | 2026-01-05 03:14:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:14:09.635167 | orchestrator | 2026-01-05 03:14:09 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:14:09.636280 | orchestrator | 2026-01-05 03:14:09 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:14:09.636313 | orchestrator | 2026-01-05 03:14:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:14:12.688057 | orchestrator | 2026-01-05 03:14:12 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:14:12.691140 | orchestrator | 2026-01-05 03:14:12 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:14:12.691270 | orchestrator | 2026-01-05 03:14:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:14:15.738463 | orchestrator | 2026-01-05 03:14:15 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:14:15.741032 | orchestrator | 2026-01-05 03:14:15 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:14:15.741106 | orchestrator | 2026-01-05 03:14:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:14:18.792123 | orchestrator | 2026-01-05 
03:14:18 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:14:18.794302 | orchestrator | 2026-01-05 03:14:18 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:14:18.794357 | orchestrator | 2026-01-05 03:14:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:14:21.849460 | orchestrator | 2026-01-05 03:14:21 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:14:21.850905 | orchestrator | 2026-01-05 03:14:21 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:14:21.850936 | orchestrator | 2026-01-05 03:14:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:14:24.899926 | orchestrator | 2026-01-05 03:14:24 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:14:24.901996 | orchestrator | 2026-01-05 03:14:24 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:14:24.902080 | orchestrator | 2026-01-05 03:14:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:14:27.954505 | orchestrator | 2026-01-05 03:14:27 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:14:27.955217 | orchestrator | 2026-01-05 03:14:27 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:14:27.955272 | orchestrator | 2026-01-05 03:14:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:14:31.007358 | orchestrator | 2026-01-05 03:14:31 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:14:31.009327 | orchestrator | 2026-01-05 03:14:31 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:14:31.009389 | orchestrator | 2026-01-05 03:14:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:14:34.059219 | orchestrator | 2026-01-05 03:14:34 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 03:14:34.061790 | orchestrator | 2026-01-05 03:14:34 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:14:34.061898 | orchestrator | 2026-01-05 03:14:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:14:37.113008 | orchestrator | 2026-01-05 03:14:37 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:14:37.115567 | orchestrator | 2026-01-05 03:14:37 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:14:37.115693 | orchestrator | 2026-01-05 03:14:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:14:40.150977 | orchestrator | 2026-01-05 03:14:40 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:14:40.152916 | orchestrator | 2026-01-05 03:14:40 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:14:40.153018 | orchestrator | 2026-01-05 03:14:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:14:43.205994 | orchestrator | 2026-01-05 03:14:43 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:14:43.208365 | orchestrator | 2026-01-05 03:14:43 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:14:43.208422 | orchestrator | 2026-01-05 03:14:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:14:46.255846 | orchestrator | 2026-01-05 03:14:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:14:46.257956 | orchestrator | 2026-01-05 03:14:46 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:14:46.258178 | orchestrator | 2026-01-05 03:14:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:14:49.302461 | orchestrator | 2026-01-05 03:14:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:14:49.304397 | orchestrator | 2026-01-05 03:14:49 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:14:49.304431 | orchestrator | 2026-01-05 03:14:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:14:52.362291 | orchestrator | 2026-01-05 03:14:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:14:52.364121 | orchestrator | 2026-01-05 03:14:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:14:52.364199 | orchestrator | 2026-01-05 03:14:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:14:55.412293 | orchestrator | 2026-01-05 03:14:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:14:55.413724 | orchestrator | 2026-01-05 03:14:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:14:55.413781 | orchestrator | 2026-01-05 03:14:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:14:58.463737 | orchestrator | 2026-01-05 03:14:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:14:58.466007 | orchestrator | 2026-01-05 03:14:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:14:58.466134 | orchestrator | 2026-01-05 03:14:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:15:01.519882 | orchestrator | 2026-01-05 03:15:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:15:01.521551 | orchestrator | 2026-01-05 03:15:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:15:01.521599 | orchestrator | 2026-01-05 03:15:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:15:04.562907 | orchestrator | 2026-01-05 03:15:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:15:04.563736 | orchestrator | 2026-01-05 03:15:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
03:15:04.563774 | orchestrator | 2026-01-05 03:15:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:15:07.607347 | orchestrator | 2026-01-05 03:15:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:15:07.607978 | orchestrator | 2026-01-05 03:15:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:15:07.607999 | orchestrator | 2026-01-05 03:15:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:15:10.655152 | orchestrator | 2026-01-05 03:15:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:15:10.657532 | orchestrator | 2026-01-05 03:15:10 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:15:10.657668 | orchestrator | 2026-01-05 03:15:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:15:13.703075 | orchestrator | 2026-01-05 03:15:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:15:13.705929 | orchestrator | 2026-01-05 03:15:13 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:15:13.705985 | orchestrator | 2026-01-05 03:15:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:15:16.755341 | orchestrator | 2026-01-05 03:15:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:15:16.758387 | orchestrator | 2026-01-05 03:15:16 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:15:16.758480 | orchestrator | 2026-01-05 03:15:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:15:19.801991 | orchestrator | 2026-01-05 03:15:19 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:15:19.804050 | orchestrator | 2026-01-05 03:15:19 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:15:19.804101 | orchestrator | 2026-01-05 03:15:19 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:15:22.856999 | orchestrator | 2026-01-05 03:15:22 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:15:22.858859 | orchestrator | 2026-01-05 03:15:22 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:15:22.858912 | orchestrator | 2026-01-05 03:15:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:15:25.913516 | orchestrator | 2026-01-05 03:15:25 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:15:25.915458 | orchestrator | 2026-01-05 03:15:25 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:15:25.915531 | orchestrator | 2026-01-05 03:15:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:15:28.968276 | orchestrator | 2026-01-05 03:15:28 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:15:28.970113 | orchestrator | 2026-01-05 03:15:28 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:15:28.970177 | orchestrator | 2026-01-05 03:15:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:15:32.023940 | orchestrator | 2026-01-05 03:15:32 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:15:32.026246 | orchestrator | 2026-01-05 03:15:32 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:15:32.026301 | orchestrator | 2026-01-05 03:15:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:15:35.072798 | orchestrator | 2026-01-05 03:15:35 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:15:35.074987 | orchestrator | 2026-01-05 03:15:35 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:15:35.075065 | orchestrator | 2026-01-05 03:15:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:15:38.125948 | orchestrator | 2026-01-05 
03:15:38 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:15:38.126241 | orchestrator | 2026-01-05 03:15:38 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:15:38.126272 | orchestrator | 2026-01-05 03:15:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:15:41.165706 | orchestrator | 2026-01-05 03:15:41 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:15:41.165840 | orchestrator | 2026-01-05 03:15:41 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:15:41.165885 | orchestrator | 2026-01-05 03:15:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:15:44.217216 | orchestrator | 2026-01-05 03:15:44 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:15:44.219086 | orchestrator | 2026-01-05 03:15:44 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:15:44.219830 | orchestrator | 2026-01-05 03:15:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:15:47.272764 | orchestrator | 2026-01-05 03:15:47 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:15:47.274171 | orchestrator | 2026-01-05 03:15:47 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:15:47.274223 | orchestrator | 2026-01-05 03:15:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:15:50.320727 | orchestrator | 2026-01-05 03:15:50 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:15:50.322399 | orchestrator | 2026-01-05 03:15:50 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:15:50.322552 | orchestrator | 2026-01-05 03:15:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:15:53.362199 | orchestrator | 2026-01-05 03:15:53 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 03:15:53.365372 | orchestrator | 2026-01-05 03:15:53 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:15:53.365447 | orchestrator | 2026-01-05 03:15:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:15:56.414495 | orchestrator | 2026-01-05 03:15:56 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:15:56.416142 | orchestrator | 2026-01-05 03:15:56 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:15:56.416253 | orchestrator | 2026-01-05 03:15:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:15:59.460755 | orchestrator | 2026-01-05 03:15:59 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:15:59.462588 | orchestrator | 2026-01-05 03:15:59 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:15:59.462909 | orchestrator | 2026-01-05 03:15:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:02.514538 | orchestrator | 2026-01-05 03:16:02 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:16:02.516489 | orchestrator | 2026-01-05 03:16:02 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:16:02.516894 | orchestrator | 2026-01-05 03:16:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:05.562187 | orchestrator | 2026-01-05 03:16:05 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:16:05.563392 | orchestrator | 2026-01-05 03:16:05 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:16:05.563504 | orchestrator | 2026-01-05 03:16:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:08.615444 | orchestrator | 2026-01-05 03:16:08 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:16:08.616873 | orchestrator | 2026-01-05 03:16:08 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:16:08.616943 | orchestrator | 2026-01-05 03:16:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:11.659051 | orchestrator | 2026-01-05 03:16:11 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:16:11.662750 | orchestrator | 2026-01-05 03:16:11 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:16:11.662835 | orchestrator | 2026-01-05 03:16:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:14.711318 | orchestrator | 2026-01-05 03:16:14 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:16:14.713243 | orchestrator | 2026-01-05 03:16:14 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:16:14.713309 | orchestrator | 2026-01-05 03:16:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:17.759064 | orchestrator | 2026-01-05 03:16:17 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:16:17.760844 | orchestrator | 2026-01-05 03:16:17 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:16:17.760907 | orchestrator | 2026-01-05 03:16:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:20.815304 | orchestrator | 2026-01-05 03:16:20 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:16:20.815958 | orchestrator | 2026-01-05 03:16:20 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:16:20.815987 | orchestrator | 2026-01-05 03:16:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:23.868632 | orchestrator | 2026-01-05 03:16:23 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:16:23.870076 | orchestrator | 2026-01-05 03:16:23 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
03:16:23.870304 | orchestrator | 2026-01-05 03:16:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:26.923144 | orchestrator | 2026-01-05 03:16:26 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:16:26.924013 | orchestrator | 2026-01-05 03:16:26 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:16:26.924056 | orchestrator | 2026-01-05 03:16:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:29.973719 | orchestrator | 2026-01-05 03:16:29 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:16:29.975408 | orchestrator | 2026-01-05 03:16:29 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:16:29.975461 | orchestrator | 2026-01-05 03:16:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:33.027154 | orchestrator | 2026-01-05 03:16:33 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:16:33.028212 | orchestrator | 2026-01-05 03:16:33 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:16:33.028254 | orchestrator | 2026-01-05 03:16:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:36.078987 | orchestrator | 2026-01-05 03:16:36 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:16:36.080916 | orchestrator | 2026-01-05 03:16:36 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:16:36.080995 | orchestrator | 2026-01-05 03:16:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:39.124807 | orchestrator | 2026-01-05 03:16:39 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:16:39.126705 | orchestrator | 2026-01-05 03:16:39 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:16:39.126760 | orchestrator | 2026-01-05 03:16:39 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:16:42.183202 | orchestrator | 2026-01-05 03:16:42 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:16:42.185618 | orchestrator | 2026-01-05 03:16:42 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:16:42.185681 | orchestrator | 2026-01-05 03:16:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:45.240318 | orchestrator | 2026-01-05 03:16:45 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:16:45.241728 | orchestrator | 2026-01-05 03:16:45 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:16:45.241790 | orchestrator | 2026-01-05 03:16:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:48.292344 | orchestrator | 2026-01-05 03:16:48 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:16:48.293475 | orchestrator | 2026-01-05 03:16:48 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:16:48.293634 | orchestrator | 2026-01-05 03:16:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:51.343403 | orchestrator | 2026-01-05 03:16:51 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:16:51.343896 | orchestrator | 2026-01-05 03:16:51 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:16:51.343922 | orchestrator | 2026-01-05 03:16:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:54.394319 | orchestrator | 2026-01-05 03:16:54 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:16:54.395633 | orchestrator | 2026-01-05 03:16:54 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:16:54.395731 | orchestrator | 2026-01-05 03:16:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:16:57.448095 | orchestrator | 2026-01-05 
03:16:57 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:16:57.450599 | orchestrator | 2026-01-05 03:16:57 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:16:57.450710 | orchestrator | 2026-01-05 03:16:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:00.508363 | orchestrator | 2026-01-05 03:17:00 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:17:00.509232 | orchestrator | 2026-01-05 03:17:00 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:17:00.509259 | orchestrator | 2026-01-05 03:17:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:03.557682 | orchestrator | 2026-01-05 03:17:03 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:17:03.559688 | orchestrator | 2026-01-05 03:17:03 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:17:03.559734 | orchestrator | 2026-01-05 03:17:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:06.608802 | orchestrator | 2026-01-05 03:17:06 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:17:06.609815 | orchestrator | 2026-01-05 03:17:06 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:17:06.611641 | orchestrator | 2026-01-05 03:17:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:09.661617 | orchestrator | 2026-01-05 03:17:09 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:17:09.663216 | orchestrator | 2026-01-05 03:17:09 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:17:09.663300 | orchestrator | 2026-01-05 03:17:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:12.713807 | orchestrator | 2026-01-05 03:17:12 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 03:17:12.716237 | orchestrator | 2026-01-05 03:17:12 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:17:12.716309 | orchestrator | 2026-01-05 03:17:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:15.774804 | orchestrator | 2026-01-05 03:17:15 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:17:15.775896 | orchestrator | 2026-01-05 03:17:15 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:17:15.776587 | orchestrator | 2026-01-05 03:17:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:18.824114 | orchestrator | 2026-01-05 03:17:18 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:17:18.825913 | orchestrator | 2026-01-05 03:17:18 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:17:18.826618 | orchestrator | 2026-01-05 03:17:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:21.878685 | orchestrator | 2026-01-05 03:17:21 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:17:21.880825 | orchestrator | 2026-01-05 03:17:21 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:17:21.880906 | orchestrator | 2026-01-05 03:17:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:24.929972 | orchestrator | 2026-01-05 03:17:24 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:17:24.932091 | orchestrator | 2026-01-05 03:17:24 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:17:24.932118 | orchestrator | 2026-01-05 03:17:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:27.983689 | orchestrator | 2026-01-05 03:17:27 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:17:27.985343 | orchestrator | 2026-01-05 03:17:27 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:17:27.985409 | orchestrator | 2026-01-05 03:17:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:31.041454 | orchestrator | 2026-01-05 03:17:31 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:17:31.042200 | orchestrator | 2026-01-05 03:17:31 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:17:31.042284 | orchestrator | 2026-01-05 03:17:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:34.094372 | orchestrator | 2026-01-05 03:17:34 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:17:34.095156 | orchestrator | 2026-01-05 03:17:34 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:17:34.095207 | orchestrator | 2026-01-05 03:17:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:37.146871 | orchestrator | 2026-01-05 03:17:37 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:17:37.148266 | orchestrator | 2026-01-05 03:17:37 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:17:37.148322 | orchestrator | 2026-01-05 03:17:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:40.184057 | orchestrator | 2026-01-05 03:17:40 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:17:40.186703 | orchestrator | 2026-01-05 03:17:40 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:17:40.186755 | orchestrator | 2026-01-05 03:17:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:43.230965 | orchestrator | 2026-01-05 03:17:43 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:17:43.231524 | orchestrator | 2026-01-05 03:17:43 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
03:17:43.231722 | orchestrator | 2026-01-05 03:17:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:46.276757 | orchestrator | 2026-01-05 03:17:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:17:46.279084 | orchestrator | 2026-01-05 03:17:46 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:17:46.279174 | orchestrator | 2026-01-05 03:17:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:49.325795 | orchestrator | 2026-01-05 03:17:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:17:49.329355 | orchestrator | 2026-01-05 03:17:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:17:49.329425 | orchestrator | 2026-01-05 03:17:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:52.381448 | orchestrator | 2026-01-05 03:17:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:17:52.384679 | orchestrator | 2026-01-05 03:17:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:17:52.384763 | orchestrator | 2026-01-05 03:17:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:55.435730 | orchestrator | 2026-01-05 03:17:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:17:55.436759 | orchestrator | 2026-01-05 03:17:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:17:55.436808 | orchestrator | 2026-01-05 03:17:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:17:58.501570 | orchestrator | 2026-01-05 03:17:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:17:58.502773 | orchestrator | 2026-01-05 03:17:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:17:58.503015 | orchestrator | 2026-01-05 03:17:58 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:18:01.559976 | orchestrator | 2026-01-05 03:18:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:18:01.562159 | orchestrator | 2026-01-05 03:18:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:18:01.562211 | orchestrator | 2026-01-05 03:18:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:18:04.611785 | orchestrator | 2026-01-05 03:18:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:18:04.614513 | orchestrator | 2026-01-05 03:18:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:18:04.614838 | orchestrator | 2026-01-05 03:18:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:18:07.668947 | orchestrator | 2026-01-05 03:18:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:18:07.671774 | orchestrator | 2026-01-05 03:18:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:18:07.671858 | orchestrator | 2026-01-05 03:18:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:18:10.720065 | orchestrator | 2026-01-05 03:18:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:18:10.721171 | orchestrator | 2026-01-05 03:18:10 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:18:10.721210 | orchestrator | 2026-01-05 03:18:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:18:13.774757 | orchestrator | 2026-01-05 03:18:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:18:13.775648 | orchestrator | 2026-01-05 03:18:13 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:18:13.775679 | orchestrator | 2026-01-05 03:18:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:18:16.824689 | orchestrator | 2026-01-05 
03:18:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:18:16.826632 | orchestrator | 2026-01-05 03:18:16 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:18:16.826712 | orchestrator | 2026-01-05 03:18:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:18:19.876398 | orchestrator | 2026-01-05 03:18:19 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:18:19.878670 | orchestrator | 2026-01-05 03:18:19 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:18:19.878786 | orchestrator | 2026-01-05 03:18:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:18:22.928549 | orchestrator | 2026-01-05 03:18:22 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:18:22.930311 | orchestrator | 2026-01-05 03:18:22 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:18:22.930346 | orchestrator | 2026-01-05 03:18:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:18:25.980510 | orchestrator | 2026-01-05 03:18:25 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:18:25.981629 | orchestrator | 2026-01-05 03:18:25 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:18:25.981810 | orchestrator | 2026-01-05 03:18:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:18:29.033791 | orchestrator | 2026-01-05 03:18:29 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:18:29.034176 | orchestrator | 2026-01-05 03:18:29 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:18:29.034204 | orchestrator | 2026-01-05 03:18:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:18:32.081593 | orchestrator | 2026-01-05 03:18:32 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 03:18:32.081730 | orchestrator | 2026-01-05 03:18:32 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:18:32.081773 | orchestrator | 2026-01-05 03:18:32 | INFO  | Wait 1 second(s) until the next check [... identical polling cycles (tasks e3a9f185-bcb6-4913-bb1a-d444ee1687d0 and 00e2a2c6-6b94-416a-ac35-b73676807745 in state STARTED, checked every ~3 s) repeated through 03:23:46 ...] 2026-01-05 03:23:49.336095 | orchestrator | 2026-01-05 03:23:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 03:23:49.339893 | orchestrator | 2026-01-05 03:23:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:23:49.339987 | orchestrator | 2026-01-05 03:23:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:23:52.392253 | orchestrator | 2026-01-05 03:23:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:23:52.392380 | orchestrator | 2026-01-05 03:23:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:23:52.392763 | orchestrator | 2026-01-05 03:23:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:23:55.446226 | orchestrator | 2026-01-05 03:23:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:23:55.448403 | orchestrator | 2026-01-05 03:23:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:23:55.448530 | orchestrator | 2026-01-05 03:23:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:23:58.497707 | orchestrator | 2026-01-05 03:23:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:23:58.500585 | orchestrator | 2026-01-05 03:23:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:23:58.500652 | orchestrator | 2026-01-05 03:23:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:24:01.558705 | orchestrator | 2026-01-05 03:24:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:24:01.559387 | orchestrator | 2026-01-05 03:24:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:24:01.559669 | orchestrator | 2026-01-05 03:24:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:24:04.610331 | orchestrator | 2026-01-05 03:24:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:24:04.611330 | orchestrator | 2026-01-05 03:24:04 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:24:04.611359 | orchestrator | 2026-01-05 03:24:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:24:07.665630 | orchestrator | 2026-01-05 03:24:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:24:07.666545 | orchestrator | 2026-01-05 03:24:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:24:07.666592 | orchestrator | 2026-01-05 03:24:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:24:10.719113 | orchestrator | 2026-01-05 03:24:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:24:10.722452 | orchestrator | 2026-01-05 03:24:10 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:24:10.722520 | orchestrator | 2026-01-05 03:24:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:24:13.770147 | orchestrator | 2026-01-05 03:24:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:24:13.771599 | orchestrator | 2026-01-05 03:24:13 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:24:13.771703 | orchestrator | 2026-01-05 03:24:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:24:16.822346 | orchestrator | 2026-01-05 03:24:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:24:16.823972 | orchestrator | 2026-01-05 03:24:16 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:24:16.824036 | orchestrator | 2026-01-05 03:24:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:24:19.869495 | orchestrator | 2026-01-05 03:24:19 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:24:19.870740 | orchestrator | 2026-01-05 03:24:19 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
03:24:19.870786 | orchestrator | 2026-01-05 03:24:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:24:22.925480 | orchestrator | 2026-01-05 03:24:22 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:24:22.927436 | orchestrator | 2026-01-05 03:24:22 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:24:22.927478 | orchestrator | 2026-01-05 03:24:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:24:25.975245 | orchestrator | 2026-01-05 03:24:25 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:24:25.976457 | orchestrator | 2026-01-05 03:24:25 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:24:25.976519 | orchestrator | 2026-01-05 03:24:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:24:29.035597 | orchestrator | 2026-01-05 03:24:29 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:24:29.038349 | orchestrator | 2026-01-05 03:24:29 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:24:29.038423 | orchestrator | 2026-01-05 03:24:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:24:32.081545 | orchestrator | 2026-01-05 03:24:32 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:24:32.082384 | orchestrator | 2026-01-05 03:24:32 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:24:32.082415 | orchestrator | 2026-01-05 03:24:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:24:35.135018 | orchestrator | 2026-01-05 03:24:35 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:24:35.137070 | orchestrator | 2026-01-05 03:24:35 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:24:35.137113 | orchestrator | 2026-01-05 03:24:35 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:24:38.183313 | orchestrator | 2026-01-05 03:24:38 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:24:38.184855 | orchestrator | 2026-01-05 03:24:38 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:24:38.184907 | orchestrator | 2026-01-05 03:24:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:24:41.232959 | orchestrator | 2026-01-05 03:24:41 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:24:41.233901 | orchestrator | 2026-01-05 03:24:41 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:24:41.233933 | orchestrator | 2026-01-05 03:24:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:24:44.284110 | orchestrator | 2026-01-05 03:24:44 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:24:44.284354 | orchestrator | 2026-01-05 03:24:44 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:24:44.284370 | orchestrator | 2026-01-05 03:24:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:24:47.335649 | orchestrator | 2026-01-05 03:24:47 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:24:47.336574 | orchestrator | 2026-01-05 03:24:47 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:24:47.336901 | orchestrator | 2026-01-05 03:24:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:24:50.390285 | orchestrator | 2026-01-05 03:24:50 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:24:50.391798 | orchestrator | 2026-01-05 03:24:50 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:24:50.391832 | orchestrator | 2026-01-05 03:24:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:24:53.448931 | orchestrator | 2026-01-05 
03:24:53 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:24:53.450223 | orchestrator | 2026-01-05 03:24:53 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:24:53.450257 | orchestrator | 2026-01-05 03:24:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:24:56.494347 | orchestrator | 2026-01-05 03:24:56 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:24:56.495753 | orchestrator | 2026-01-05 03:24:56 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:24:56.495875 | orchestrator | 2026-01-05 03:24:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:24:59.548748 | orchestrator | 2026-01-05 03:24:59 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:24:59.550498 | orchestrator | 2026-01-05 03:24:59 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:24:59.550723 | orchestrator | 2026-01-05 03:24:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:02.598988 | orchestrator | 2026-01-05 03:25:02 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:25:02.600997 | orchestrator | 2026-01-05 03:25:02 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:25:02.601155 | orchestrator | 2026-01-05 03:25:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:05.648195 | orchestrator | 2026-01-05 03:25:05 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:25:05.650113 | orchestrator | 2026-01-05 03:25:05 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:25:05.650223 | orchestrator | 2026-01-05 03:25:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:08.709885 | orchestrator | 2026-01-05 03:25:08 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 03:25:08.711190 | orchestrator | 2026-01-05 03:25:08 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:25:08.711242 | orchestrator | 2026-01-05 03:25:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:11.755596 | orchestrator | 2026-01-05 03:25:11 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:25:11.757185 | orchestrator | 2026-01-05 03:25:11 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:25:11.757259 | orchestrator | 2026-01-05 03:25:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:14.809591 | orchestrator | 2026-01-05 03:25:14 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:25:14.811605 | orchestrator | 2026-01-05 03:25:14 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:25:14.811661 | orchestrator | 2026-01-05 03:25:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:17.860571 | orchestrator | 2026-01-05 03:25:17 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:25:17.862542 | orchestrator | 2026-01-05 03:25:17 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:25:17.862598 | orchestrator | 2026-01-05 03:25:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:20.908864 | orchestrator | 2026-01-05 03:25:20 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:25:20.911277 | orchestrator | 2026-01-05 03:25:20 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:25:20.911526 | orchestrator | 2026-01-05 03:25:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:23.962901 | orchestrator | 2026-01-05 03:25:23 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:25:23.964597 | orchestrator | 2026-01-05 03:25:23 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:25:23.964622 | orchestrator | 2026-01-05 03:25:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:27.013264 | orchestrator | 2026-01-05 03:25:27 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:25:27.015051 | orchestrator | 2026-01-05 03:25:27 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:25:27.015175 | orchestrator | 2026-01-05 03:25:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:30.056647 | orchestrator | 2026-01-05 03:25:30 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:25:30.058658 | orchestrator | 2026-01-05 03:25:30 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:25:30.058705 | orchestrator | 2026-01-05 03:25:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:33.105096 | orchestrator | 2026-01-05 03:25:33 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:25:33.106557 | orchestrator | 2026-01-05 03:25:33 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:25:33.106621 | orchestrator | 2026-01-05 03:25:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:36.154329 | orchestrator | 2026-01-05 03:25:36 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:25:36.156216 | orchestrator | 2026-01-05 03:25:36 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:25:36.156282 | orchestrator | 2026-01-05 03:25:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:39.198081 | orchestrator | 2026-01-05 03:25:39 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:25:39.200285 | orchestrator | 2026-01-05 03:25:39 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
03:25:39.200378 | orchestrator | 2026-01-05 03:25:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:42.250652 | orchestrator | 2026-01-05 03:25:42 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:25:42.252065 | orchestrator | 2026-01-05 03:25:42 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:25:42.252153 | orchestrator | 2026-01-05 03:25:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:45.301279 | orchestrator | 2026-01-05 03:25:45 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:25:45.301416 | orchestrator | 2026-01-05 03:25:45 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:25:45.301425 | orchestrator | 2026-01-05 03:25:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:48.355718 | orchestrator | 2026-01-05 03:25:48 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:25:48.358222 | orchestrator | 2026-01-05 03:25:48 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:25:48.358294 | orchestrator | 2026-01-05 03:25:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:51.407301 | orchestrator | 2026-01-05 03:25:51 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:25:51.409213 | orchestrator | 2026-01-05 03:25:51 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:25:51.409283 | orchestrator | 2026-01-05 03:25:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:25:54.458828 | orchestrator | 2026-01-05 03:25:54 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:25:54.459602 | orchestrator | 2026-01-05 03:25:54 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:25:54.459661 | orchestrator | 2026-01-05 03:25:54 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:25:57.513982 | orchestrator | 2026-01-05 03:25:57 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:25:57.516086 | orchestrator | 2026-01-05 03:25:57 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:25:57.516175 | orchestrator | 2026-01-05 03:25:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:00.562587 | orchestrator | 2026-01-05 03:26:00 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:26:00.565867 | orchestrator | 2026-01-05 03:26:00 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:26:00.565911 | orchestrator | 2026-01-05 03:26:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:03.614876 | orchestrator | 2026-01-05 03:26:03 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:26:03.617797 | orchestrator | 2026-01-05 03:26:03 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:26:03.617879 | orchestrator | 2026-01-05 03:26:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:06.664657 | orchestrator | 2026-01-05 03:26:06 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:26:06.666162 | orchestrator | 2026-01-05 03:26:06 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:26:06.666242 | orchestrator | 2026-01-05 03:26:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:09.714506 | orchestrator | 2026-01-05 03:26:09 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:26:09.716603 | orchestrator | 2026-01-05 03:26:09 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:26:09.716640 | orchestrator | 2026-01-05 03:26:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:12.762156 | orchestrator | 2026-01-05 
03:26:12 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:26:12.763548 | orchestrator | 2026-01-05 03:26:12 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:26:12.763596 | orchestrator | 2026-01-05 03:26:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:15.806193 | orchestrator | 2026-01-05 03:26:15 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:26:15.807361 | orchestrator | 2026-01-05 03:26:15 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:26:15.807445 | orchestrator | 2026-01-05 03:26:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:18.857754 | orchestrator | 2026-01-05 03:26:18 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:26:18.859444 | orchestrator | 2026-01-05 03:26:18 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:26:18.859479 | orchestrator | 2026-01-05 03:26:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:21.914922 | orchestrator | 2026-01-05 03:26:21 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:26:21.917844 | orchestrator | 2026-01-05 03:26:21 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:26:21.917901 | orchestrator | 2026-01-05 03:26:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:24.962282 | orchestrator | 2026-01-05 03:26:24 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:26:24.963449 | orchestrator | 2026-01-05 03:26:24 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:26:24.963491 | orchestrator | 2026-01-05 03:26:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:28.014582 | orchestrator | 2026-01-05 03:26:28 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 03:26:28.015952 | orchestrator | 2026-01-05 03:26:28 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:26:28.016035 | orchestrator | 2026-01-05 03:26:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:31.062563 | orchestrator | 2026-01-05 03:26:31 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:26:31.063904 | orchestrator | 2026-01-05 03:26:31 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:26:31.064382 | orchestrator | 2026-01-05 03:26:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:34.109757 | orchestrator | 2026-01-05 03:26:34 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:26:34.110745 | orchestrator | 2026-01-05 03:26:34 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:26:34.110785 | orchestrator | 2026-01-05 03:26:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:37.155215 | orchestrator | 2026-01-05 03:26:37 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:26:37.155600 | orchestrator | 2026-01-05 03:26:37 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:26:37.155721 | orchestrator | 2026-01-05 03:26:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:40.199605 | orchestrator | 2026-01-05 03:26:40 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:26:40.200549 | orchestrator | 2026-01-05 03:26:40 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:26:40.200647 | orchestrator | 2026-01-05 03:26:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:43.257159 | orchestrator | 2026-01-05 03:26:43 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:26:43.259090 | orchestrator | 2026-01-05 03:26:43 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:26:43.259134 | orchestrator | 2026-01-05 03:26:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:46.294537 | orchestrator | 2026-01-05 03:26:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:26:46.296138 | orchestrator | 2026-01-05 03:26:46 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:26:46.296287 | orchestrator | 2026-01-05 03:26:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:49.343913 | orchestrator | 2026-01-05 03:26:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:26:49.346264 | orchestrator | 2026-01-05 03:26:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:26:49.346298 | orchestrator | 2026-01-05 03:26:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:52.393077 | orchestrator | 2026-01-05 03:26:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:26:52.394781 | orchestrator | 2026-01-05 03:26:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:26:52.394850 | orchestrator | 2026-01-05 03:26:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:55.444916 | orchestrator | 2026-01-05 03:26:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:26:55.446471 | orchestrator | 2026-01-05 03:26:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:26:55.446542 | orchestrator | 2026-01-05 03:26:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:26:58.500379 | orchestrator | 2026-01-05 03:26:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:26:58.502395 | orchestrator | 2026-01-05 03:26:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
03:26:58.502581 | orchestrator | 2026-01-05 03:26:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:27:01.546568 | orchestrator | 2026-01-05 03:27:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:27:01.547534 | orchestrator | 2026-01-05 03:27:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:27:01.547587 | orchestrator | 2026-01-05 03:27:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:27:04.594304 | orchestrator | 2026-01-05 03:27:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:27:04.595517 | orchestrator | 2026-01-05 03:27:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:27:04.595653 | orchestrator | 2026-01-05 03:27:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:27:07.641760 | orchestrator | 2026-01-05 03:27:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:27:07.645156 | orchestrator | 2026-01-05 03:27:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:27:07.645248 | orchestrator | 2026-01-05 03:27:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:27:10.683707 | orchestrator | 2026-01-05 03:27:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:27:10.685882 | orchestrator | 2026-01-05 03:27:10 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:27:10.685918 | orchestrator | 2026-01-05 03:27:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:27:13.735206 | orchestrator | 2026-01-05 03:27:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:27:13.737581 | orchestrator | 2026-01-05 03:27:13 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:27:13.737736 | orchestrator | 2026-01-05 03:27:13 | INFO  | Wait 1 second(s) 
until the next check
2026-01-05 03:27:16.783710 | orchestrator | 2026-01-05 03:27:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 03:27:16.785689 | orchestrator | 2026-01-05 03:27:16 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 03:27:16.785841 | orchestrator | 2026-01-05 03:27:16 | INFO  | Wait 1 second(s) until the next check
[the three lines above repeat every ~3 seconds from 03:27:19 through 03:32:27, both tasks remaining in state STARTED throughout]
2026-01-05 03:32:31.024844 | orchestrator | 2026-01-05 03:32:31 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 03:32:31.026802 | orchestrator | 2026-01-05 03:32:31 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 03:32:31.026848 | orchestrator | 2026-01-05 03:32:31 | INFO  | Wait 1 second(s)
until the next check 2026-01-05 03:32:34.069141 | orchestrator | 2026-01-05 03:32:34 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:32:34.071314 | orchestrator | 2026-01-05 03:32:34 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:32:34.071415 | orchestrator | 2026-01-05 03:32:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:32:37.123644 | orchestrator | 2026-01-05 03:32:37 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:32:37.124850 | orchestrator | 2026-01-05 03:32:37 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:32:37.124956 | orchestrator | 2026-01-05 03:32:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:32:40.169318 | orchestrator | 2026-01-05 03:32:40 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:32:40.191475 | orchestrator | 2026-01-05 03:32:40 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:32:40.191540 | orchestrator | 2026-01-05 03:32:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:32:43.218324 | orchestrator | 2026-01-05 03:32:43 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:32:43.220280 | orchestrator | 2026-01-05 03:32:43 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:32:43.220358 | orchestrator | 2026-01-05 03:32:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:32:46.267159 | orchestrator | 2026-01-05 03:32:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:32:46.268200 | orchestrator | 2026-01-05 03:32:46 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:32:46.268236 | orchestrator | 2026-01-05 03:32:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:32:49.316445 | orchestrator | 2026-01-05 
03:32:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:32:49.317125 | orchestrator | 2026-01-05 03:32:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:32:49.317305 | orchestrator | 2026-01-05 03:32:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:32:52.357555 | orchestrator | 2026-01-05 03:32:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:32:52.358815 | orchestrator | 2026-01-05 03:32:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:32:52.358883 | orchestrator | 2026-01-05 03:32:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:32:55.410964 | orchestrator | 2026-01-05 03:32:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:32:55.413473 | orchestrator | 2026-01-05 03:32:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:32:55.413543 | orchestrator | 2026-01-05 03:32:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:32:58.462921 | orchestrator | 2026-01-05 03:32:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:32:58.465236 | orchestrator | 2026-01-05 03:32:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:32:58.465288 | orchestrator | 2026-01-05 03:32:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:33:01.519606 | orchestrator | 2026-01-05 03:33:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:33:01.522354 | orchestrator | 2026-01-05 03:33:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:33:01.522423 | orchestrator | 2026-01-05 03:33:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:33:04.570836 | orchestrator | 2026-01-05 03:33:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 03:33:04.572131 | orchestrator | 2026-01-05 03:33:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:33:04.572183 | orchestrator | 2026-01-05 03:33:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:33:07.623501 | orchestrator | 2026-01-05 03:33:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:33:07.626134 | orchestrator | 2026-01-05 03:33:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:33:07.626202 | orchestrator | 2026-01-05 03:33:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:33:10.676941 | orchestrator | 2026-01-05 03:33:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:33:10.678168 | orchestrator | 2026-01-05 03:33:10 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:33:10.678200 | orchestrator | 2026-01-05 03:33:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:33:13.724453 | orchestrator | 2026-01-05 03:33:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:33:13.724684 | orchestrator | 2026-01-05 03:33:13 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:33:13.724716 | orchestrator | 2026-01-05 03:33:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:33:16.772048 | orchestrator | 2026-01-05 03:33:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:33:16.774198 | orchestrator | 2026-01-05 03:33:16 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:33:16.774245 | orchestrator | 2026-01-05 03:33:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:33:19.815068 | orchestrator | 2026-01-05 03:33:19 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:33:19.817288 | orchestrator | 2026-01-05 03:33:19 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:33:19.817389 | orchestrator | 2026-01-05 03:33:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:33:22.862469 | orchestrator | 2026-01-05 03:33:22 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:33:22.864857 | orchestrator | 2026-01-05 03:33:22 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:33:22.864991 | orchestrator | 2026-01-05 03:33:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:33:25.916758 | orchestrator | 2026-01-05 03:33:25 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:33:25.918742 | orchestrator | 2026-01-05 03:33:25 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:33:25.918786 | orchestrator | 2026-01-05 03:33:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:33:28.969387 | orchestrator | 2026-01-05 03:33:28 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:33:28.970423 | orchestrator | 2026-01-05 03:33:28 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:33:28.970467 | orchestrator | 2026-01-05 03:33:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:33:32.022397 | orchestrator | 2026-01-05 03:33:32 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:33:32.023843 | orchestrator | 2026-01-05 03:33:32 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:33:32.023937 | orchestrator | 2026-01-05 03:33:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:33:35.072284 | orchestrator | 2026-01-05 03:33:35 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:33:35.073796 | orchestrator | 2026-01-05 03:33:35 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
03:33:35.073872 | orchestrator | 2026-01-05 03:33:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:33:38.121050 | orchestrator | 2026-01-05 03:33:38 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:33:38.122162 | orchestrator | 2026-01-05 03:33:38 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:33:38.122262 | orchestrator | 2026-01-05 03:33:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:33:41.168520 | orchestrator | 2026-01-05 03:33:41 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:33:41.171738 | orchestrator | 2026-01-05 03:33:41 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:33:41.172205 | orchestrator | 2026-01-05 03:33:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:33:44.214173 | orchestrator | 2026-01-05 03:33:44 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:33:44.214635 | orchestrator | 2026-01-05 03:33:44 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:33:44.214918 | orchestrator | 2026-01-05 03:33:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:33:47.264537 | orchestrator | 2026-01-05 03:33:47 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:33:47.266264 | orchestrator | 2026-01-05 03:33:47 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:33:47.266304 | orchestrator | 2026-01-05 03:33:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:33:50.316746 | orchestrator | 2026-01-05 03:33:50 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:33:50.318930 | orchestrator | 2026-01-05 03:33:50 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:33:50.318999 | orchestrator | 2026-01-05 03:33:50 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:33:53.372866 | orchestrator | 2026-01-05 03:33:53 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:33:53.374696 | orchestrator | 2026-01-05 03:33:53 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:33:53.374810 | orchestrator | 2026-01-05 03:33:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:33:56.431749 | orchestrator | 2026-01-05 03:33:56 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:33:56.434235 | orchestrator | 2026-01-05 03:33:56 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:33:56.434275 | orchestrator | 2026-01-05 03:33:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:33:59.482300 | orchestrator | 2026-01-05 03:33:59 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:33:59.484570 | orchestrator | 2026-01-05 03:33:59 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:33:59.484628 | orchestrator | 2026-01-05 03:33:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:02.525194 | orchestrator | 2026-01-05 03:34:02 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:34:02.526455 | orchestrator | 2026-01-05 03:34:02 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:34:02.526526 | orchestrator | 2026-01-05 03:34:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:05.569855 | orchestrator | 2026-01-05 03:34:05 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:34:05.570720 | orchestrator | 2026-01-05 03:34:05 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:34:05.570739 | orchestrator | 2026-01-05 03:34:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:08.620425 | orchestrator | 2026-01-05 
03:34:08 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:34:08.622238 | orchestrator | 2026-01-05 03:34:08 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:34:08.622291 | orchestrator | 2026-01-05 03:34:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:11.675237 | orchestrator | 2026-01-05 03:34:11 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:34:11.677880 | orchestrator | 2026-01-05 03:34:11 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:34:11.677974 | orchestrator | 2026-01-05 03:34:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:14.727781 | orchestrator | 2026-01-05 03:34:14 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:34:14.728264 | orchestrator | 2026-01-05 03:34:14 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:34:14.728300 | orchestrator | 2026-01-05 03:34:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:17.782806 | orchestrator | 2026-01-05 03:34:17 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:34:17.785165 | orchestrator | 2026-01-05 03:34:17 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:34:17.785226 | orchestrator | 2026-01-05 03:34:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:20.833109 | orchestrator | 2026-01-05 03:34:20 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:34:20.834254 | orchestrator | 2026-01-05 03:34:20 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:34:20.834309 | orchestrator | 2026-01-05 03:34:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:23.884334 | orchestrator | 2026-01-05 03:34:23 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 03:34:23.885452 | orchestrator | 2026-01-05 03:34:23 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:34:23.885546 | orchestrator | 2026-01-05 03:34:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:26.940664 | orchestrator | 2026-01-05 03:34:26 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:34:26.942608 | orchestrator | 2026-01-05 03:34:26 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:34:26.942644 | orchestrator | 2026-01-05 03:34:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:29.985807 | orchestrator | 2026-01-05 03:34:29 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:34:29.987892 | orchestrator | 2026-01-05 03:34:29 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:34:29.988070 | orchestrator | 2026-01-05 03:34:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:33.035582 | orchestrator | 2026-01-05 03:34:33 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:34:33.036432 | orchestrator | 2026-01-05 03:34:33 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:34:33.036511 | orchestrator | 2026-01-05 03:34:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:36.079488 | orchestrator | 2026-01-05 03:34:36 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:34:36.083022 | orchestrator | 2026-01-05 03:34:36 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:34:36.083081 | orchestrator | 2026-01-05 03:34:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:39.121255 | orchestrator | 2026-01-05 03:34:39 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:34:39.122458 | orchestrator | 2026-01-05 03:34:39 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:34:39.122741 | orchestrator | 2026-01-05 03:34:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:42.173524 | orchestrator | 2026-01-05 03:34:42 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:34:42.176379 | orchestrator | 2026-01-05 03:34:42 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:34:42.176629 | orchestrator | 2026-01-05 03:34:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:45.234546 | orchestrator | 2026-01-05 03:34:45 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:34:45.234887 | orchestrator | 2026-01-05 03:34:45 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:34:45.235057 | orchestrator | 2026-01-05 03:34:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:48.279337 | orchestrator | 2026-01-05 03:34:48 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:34:48.280481 | orchestrator | 2026-01-05 03:34:48 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:34:48.280532 | orchestrator | 2026-01-05 03:34:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:51.323899 | orchestrator | 2026-01-05 03:34:51 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:34:51.325514 | orchestrator | 2026-01-05 03:34:51 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:34:51.325563 | orchestrator | 2026-01-05 03:34:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:54.372317 | orchestrator | 2026-01-05 03:34:54 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:34:54.373989 | orchestrator | 2026-01-05 03:34:54 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
03:34:54.374177 | orchestrator | 2026-01-05 03:34:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:34:57.424783 | orchestrator | 2026-01-05 03:34:57 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:34:57.427391 | orchestrator | 2026-01-05 03:34:57 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:34:57.427684 | orchestrator | 2026-01-05 03:34:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:00.480749 | orchestrator | 2026-01-05 03:35:00 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:35:00.482042 | orchestrator | 2026-01-05 03:35:00 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:35:00.482133 | orchestrator | 2026-01-05 03:35:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:03.531558 | orchestrator | 2026-01-05 03:35:03 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:35:03.533655 | orchestrator | 2026-01-05 03:35:03 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:35:03.533729 | orchestrator | 2026-01-05 03:35:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:06.586262 | orchestrator | 2026-01-05 03:35:06 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:35:06.587649 | orchestrator | 2026-01-05 03:35:06 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:35:06.587694 | orchestrator | 2026-01-05 03:35:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:09.641515 | orchestrator | 2026-01-05 03:35:09 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:35:09.644437 | orchestrator | 2026-01-05 03:35:09 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:35:09.644542 | orchestrator | 2026-01-05 03:35:09 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:35:12.698781 | orchestrator | 2026-01-05 03:35:12 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:35:12.702396 | orchestrator | 2026-01-05 03:35:12 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:35:12.702468 | orchestrator | 2026-01-05 03:35:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:15.744448 | orchestrator | 2026-01-05 03:35:15 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:35:15.745971 | orchestrator | 2026-01-05 03:35:15 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:35:15.746095 | orchestrator | 2026-01-05 03:35:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:18.793092 | orchestrator | 2026-01-05 03:35:18 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:35:18.793425 | orchestrator | 2026-01-05 03:35:18 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:35:18.793458 | orchestrator | 2026-01-05 03:35:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:21.836011 | orchestrator | 2026-01-05 03:35:21 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:35:21.837709 | orchestrator | 2026-01-05 03:35:21 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:35:21.837792 | orchestrator | 2026-01-05 03:35:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:24.885120 | orchestrator | 2026-01-05 03:35:24 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:35:24.886765 | orchestrator | 2026-01-05 03:35:24 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:35:24.886884 | orchestrator | 2026-01-05 03:35:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:27.940125 | orchestrator | 2026-01-05 
03:35:27 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:35:27.941803 | orchestrator | 2026-01-05 03:35:27 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:35:27.941854 | orchestrator | 2026-01-05 03:35:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:30.992733 | orchestrator | 2026-01-05 03:35:30 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:35:30.994423 | orchestrator | 2026-01-05 03:35:30 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:35:30.994528 | orchestrator | 2026-01-05 03:35:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:34.044574 | orchestrator | 2026-01-05 03:35:34 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:35:34.045742 | orchestrator | 2026-01-05 03:35:34 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:35:34.045803 | orchestrator | 2026-01-05 03:35:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:37.093486 | orchestrator | 2026-01-05 03:35:37 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:35:37.095358 | orchestrator | 2026-01-05 03:35:37 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:35:37.095422 | orchestrator | 2026-01-05 03:35:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:40.132979 | orchestrator | 2026-01-05 03:35:40 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:35:40.134434 | orchestrator | 2026-01-05 03:35:40 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:35:40.134480 | orchestrator | 2026-01-05 03:35:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:43.182284 | orchestrator | 2026-01-05 03:35:43 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 03:35:43.183512 | orchestrator | 2026-01-05 03:35:43 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:35:43.183754 | orchestrator | 2026-01-05 03:35:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:46.236858 | orchestrator | 2026-01-05 03:35:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:35:46.238387 | orchestrator | 2026-01-05 03:35:46 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:35:46.238476 | orchestrator | 2026-01-05 03:35:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:49.285891 | orchestrator | 2026-01-05 03:35:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:35:49.287112 | orchestrator | 2026-01-05 03:35:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:35:49.287173 | orchestrator | 2026-01-05 03:35:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:52.337640 | orchestrator | 2026-01-05 03:35:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:35:52.338832 | orchestrator | 2026-01-05 03:35:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:35:52.338871 | orchestrator | 2026-01-05 03:35:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:55.383206 | orchestrator | 2026-01-05 03:35:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:35:55.384256 | orchestrator | 2026-01-05 03:35:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:35:55.384492 | orchestrator | 2026-01-05 03:35:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:35:58.433792 | orchestrator | 2026-01-05 03:35:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:35:58.435840 | orchestrator | 2026-01-05 03:35:58 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:35:58.435970 | orchestrator | 2026-01-05 03:35:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:36:01.487737 | orchestrator | 2026-01-05 03:36:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:36:01.489906 | orchestrator | 2026-01-05 03:36:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:36:01.489956 | orchestrator | 2026-01-05 03:36:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:36:04.541694 | orchestrator | 2026-01-05 03:36:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:36:04.544060 | orchestrator | 2026-01-05 03:36:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:36:04.544121 | orchestrator | 2026-01-05 03:36:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:36:07.595027 | orchestrator | 2026-01-05 03:36:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:36:07.597977 | orchestrator | 2026-01-05 03:36:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:36:07.598161 | orchestrator | 2026-01-05 03:36:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:36:10.646614 | orchestrator | 2026-01-05 03:36:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:36:10.648227 | orchestrator | 2026-01-05 03:36:10 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:36:10.648385 | orchestrator | 2026-01-05 03:36:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:36:13.698988 | orchestrator | 2026-01-05 03:36:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:36:13.700794 | orchestrator | 2026-01-05 03:36:13 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
03:36:13.700854 | orchestrator | 2026-01-05 03:36:13 | INFO  | Wait 1 second(s) until the next check
2026-01-05 03:36:16.748935 | orchestrator | 2026-01-05 03:36:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 03:36:16.750587 | orchestrator | 2026-01-05 03:36:16 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 03:36:16.750911 | orchestrator | 2026-01-05 03:36:16 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated roughly every 3 seconds from 03:36:19 to 03:41:43; tasks e3a9f185-bcb6-4913-bb1a-d444ee1687d0 and 00e2a2c6-6b94-416a-ac35-b73676807745 remained in state STARTED throughout ...]
2026-01-05 03:41:46.303056 | orchestrator | 2026-01-05 03:41:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 03:41:46.304533 | orchestrator | 2026-01-05 03:41:46 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 03:41:46.304589 | orchestrator | 2026-01-05 03:41:46 | INFO  | Wait 1 second(s)
until the next check 2026-01-05 03:41:49.350573 | orchestrator | 2026-01-05 03:41:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:41:49.352420 | orchestrator | 2026-01-05 03:41:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:41:49.352471 | orchestrator | 2026-01-05 03:41:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:41:52.403361 | orchestrator | 2026-01-05 03:41:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:41:52.405651 | orchestrator | 2026-01-05 03:41:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:41:52.405706 | orchestrator | 2026-01-05 03:41:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:41:55.459701 | orchestrator | 2026-01-05 03:41:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:41:55.462592 | orchestrator | 2026-01-05 03:41:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:41:55.462695 | orchestrator | 2026-01-05 03:41:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:41:58.513800 | orchestrator | 2026-01-05 03:41:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:41:58.515058 | orchestrator | 2026-01-05 03:41:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:41:58.515114 | orchestrator | 2026-01-05 03:41:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:42:01.559286 | orchestrator | 2026-01-05 03:42:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:42:01.560162 | orchestrator | 2026-01-05 03:42:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:42:01.560197 | orchestrator | 2026-01-05 03:42:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:42:04.608947 | orchestrator | 2026-01-05 
03:42:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:42:04.611260 | orchestrator | 2026-01-05 03:42:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:42:04.611309 | orchestrator | 2026-01-05 03:42:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:42:07.660821 | orchestrator | 2026-01-05 03:42:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:42:07.662192 | orchestrator | 2026-01-05 03:42:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:42:07.662296 | orchestrator | 2026-01-05 03:42:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:42:10.705118 | orchestrator | 2026-01-05 03:42:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:42:10.707527 | orchestrator | 2026-01-05 03:42:10 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:42:10.707605 | orchestrator | 2026-01-05 03:42:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:42:13.758480 | orchestrator | 2026-01-05 03:42:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:42:13.759390 | orchestrator | 2026-01-05 03:42:13 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:42:13.759430 | orchestrator | 2026-01-05 03:42:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:42:16.804192 | orchestrator | 2026-01-05 03:42:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:42:16.805659 | orchestrator | 2026-01-05 03:42:16 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:42:16.805735 | orchestrator | 2026-01-05 03:42:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:42:19.859838 | orchestrator | 2026-01-05 03:42:19 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 03:42:19.862776 | orchestrator | 2026-01-05 03:42:19 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:42:19.862815 | orchestrator | 2026-01-05 03:42:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:42:22.908526 | orchestrator | 2026-01-05 03:42:22 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:42:22.909029 | orchestrator | 2026-01-05 03:42:22 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:42:22.909064 | orchestrator | 2026-01-05 03:42:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:42:25.958999 | orchestrator | 2026-01-05 03:42:25 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:42:25.960580 | orchestrator | 2026-01-05 03:42:25 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:42:25.960624 | orchestrator | 2026-01-05 03:42:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:42:29.021096 | orchestrator | 2026-01-05 03:42:29 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:42:29.024240 | orchestrator | 2026-01-05 03:42:29 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:42:29.024317 | orchestrator | 2026-01-05 03:42:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:42:32.068592 | orchestrator | 2026-01-05 03:42:32 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:42:32.070498 | orchestrator | 2026-01-05 03:42:32 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:42:32.070575 | orchestrator | 2026-01-05 03:42:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:42:35.118582 | orchestrator | 2026-01-05 03:42:35 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:42:35.119308 | orchestrator | 2026-01-05 03:42:35 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:42:35.119433 | orchestrator | 2026-01-05 03:42:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:42:38.163206 | orchestrator | 2026-01-05 03:42:38 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:42:38.164554 | orchestrator | 2026-01-05 03:42:38 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:42:38.164631 | orchestrator | 2026-01-05 03:42:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:42:41.217610 | orchestrator | 2026-01-05 03:42:41 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:42:41.218664 | orchestrator | 2026-01-05 03:42:41 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:42:41.218707 | orchestrator | 2026-01-05 03:42:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:42:44.266754 | orchestrator | 2026-01-05 03:42:44 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:42:44.269010 | orchestrator | 2026-01-05 03:42:44 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:42:44.269245 | orchestrator | 2026-01-05 03:42:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:42:47.321854 | orchestrator | 2026-01-05 03:42:47 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:42:47.326182 | orchestrator | 2026-01-05 03:42:47 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:42:47.326304 | orchestrator | 2026-01-05 03:42:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:42:50.373711 | orchestrator | 2026-01-05 03:42:50 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:42:50.374879 | orchestrator | 2026-01-05 03:42:50 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
03:42:50.374924 | orchestrator | 2026-01-05 03:42:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:42:53.426200 | orchestrator | 2026-01-05 03:42:53 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:42:53.427679 | orchestrator | 2026-01-05 03:42:53 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:42:53.427734 | orchestrator | 2026-01-05 03:42:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:42:56.477503 | orchestrator | 2026-01-05 03:42:56 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:42:56.479365 | orchestrator | 2026-01-05 03:42:56 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:42:56.479454 | orchestrator | 2026-01-05 03:42:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:42:59.527987 | orchestrator | 2026-01-05 03:42:59 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:42:59.529261 | orchestrator | 2026-01-05 03:42:59 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:42:59.529353 | orchestrator | 2026-01-05 03:42:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:02.581891 | orchestrator | 2026-01-05 03:43:02 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:43:02.583510 | orchestrator | 2026-01-05 03:43:02 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:43:02.583655 | orchestrator | 2026-01-05 03:43:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:05.635364 | orchestrator | 2026-01-05 03:43:05 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:43:05.637517 | orchestrator | 2026-01-05 03:43:05 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:43:05.637602 | orchestrator | 2026-01-05 03:43:05 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:43:08.692891 | orchestrator | 2026-01-05 03:43:08 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:43:08.694690 | orchestrator | 2026-01-05 03:43:08 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:43:08.694758 | orchestrator | 2026-01-05 03:43:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:11.741989 | orchestrator | 2026-01-05 03:43:11 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:43:11.743814 | orchestrator | 2026-01-05 03:43:11 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:43:11.743889 | orchestrator | 2026-01-05 03:43:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:14.790002 | orchestrator | 2026-01-05 03:43:14 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:43:14.791632 | orchestrator | 2026-01-05 03:43:14 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:43:14.791693 | orchestrator | 2026-01-05 03:43:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:17.836994 | orchestrator | 2026-01-05 03:43:17 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:43:17.838890 | orchestrator | 2026-01-05 03:43:17 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:43:17.839092 | orchestrator | 2026-01-05 03:43:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:20.885006 | orchestrator | 2026-01-05 03:43:20 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:43:20.886737 | orchestrator | 2026-01-05 03:43:20 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:43:20.886820 | orchestrator | 2026-01-05 03:43:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:23.935886 | orchestrator | 2026-01-05 
03:43:23 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:43:23.938085 | orchestrator | 2026-01-05 03:43:23 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:43:23.938151 | orchestrator | 2026-01-05 03:43:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:26.984630 | orchestrator | 2026-01-05 03:43:26 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:43:26.986928 | orchestrator | 2026-01-05 03:43:26 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:43:26.987072 | orchestrator | 2026-01-05 03:43:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:30.047070 | orchestrator | 2026-01-05 03:43:30 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:43:30.049190 | orchestrator | 2026-01-05 03:43:30 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:43:30.049243 | orchestrator | 2026-01-05 03:43:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:33.089839 | orchestrator | 2026-01-05 03:43:33 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:43:33.090959 | orchestrator | 2026-01-05 03:43:33 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:43:33.091303 | orchestrator | 2026-01-05 03:43:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:36.137684 | orchestrator | 2026-01-05 03:43:36 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:43:36.140009 | orchestrator | 2026-01-05 03:43:36 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:43:36.140095 | orchestrator | 2026-01-05 03:43:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:39.183394 | orchestrator | 2026-01-05 03:43:39 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 03:43:39.185113 | orchestrator | 2026-01-05 03:43:39 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:43:39.185458 | orchestrator | 2026-01-05 03:43:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:42.228965 | orchestrator | 2026-01-05 03:43:42 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:43:42.231194 | orchestrator | 2026-01-05 03:43:42 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:43:42.231333 | orchestrator | 2026-01-05 03:43:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:45.284514 | orchestrator | 2026-01-05 03:43:45 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:43:45.286522 | orchestrator | 2026-01-05 03:43:45 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:43:45.286640 | orchestrator | 2026-01-05 03:43:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:48.326963 | orchestrator | 2026-01-05 03:43:48 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:43:48.328670 | orchestrator | 2026-01-05 03:43:48 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:43:48.328713 | orchestrator | 2026-01-05 03:43:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:51.370241 | orchestrator | 2026-01-05 03:43:51 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:43:51.372862 | orchestrator | 2026-01-05 03:43:51 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:43:51.372896 | orchestrator | 2026-01-05 03:43:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:54.423873 | orchestrator | 2026-01-05 03:43:54 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:43:54.425312 | orchestrator | 2026-01-05 03:43:54 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:43:54.425390 | orchestrator | 2026-01-05 03:43:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:43:57.467178 | orchestrator | 2026-01-05 03:43:57 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:43:57.468532 | orchestrator | 2026-01-05 03:43:57 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:43:57.468663 | orchestrator | 2026-01-05 03:43:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:00.523732 | orchestrator | 2026-01-05 03:44:00 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:44:00.525501 | orchestrator | 2026-01-05 03:44:00 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:44:00.525577 | orchestrator | 2026-01-05 03:44:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:03.575869 | orchestrator | 2026-01-05 03:44:03 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:44:03.577945 | orchestrator | 2026-01-05 03:44:03 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:44:03.578000 | orchestrator | 2026-01-05 03:44:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:06.623181 | orchestrator | 2026-01-05 03:44:06 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:44:06.624729 | orchestrator | 2026-01-05 03:44:06 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:44:06.624795 | orchestrator | 2026-01-05 03:44:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:09.672333 | orchestrator | 2026-01-05 03:44:09 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:44:09.674134 | orchestrator | 2026-01-05 03:44:09 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
03:44:09.674174 | orchestrator | 2026-01-05 03:44:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:12.720527 | orchestrator | 2026-01-05 03:44:12 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:44:12.722483 | orchestrator | 2026-01-05 03:44:12 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:44:12.722781 | orchestrator | 2026-01-05 03:44:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:15.774669 | orchestrator | 2026-01-05 03:44:15 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:44:15.776232 | orchestrator | 2026-01-05 03:44:15 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:44:15.776284 | orchestrator | 2026-01-05 03:44:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:18.820696 | orchestrator | 2026-01-05 03:44:18 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:44:18.823097 | orchestrator | 2026-01-05 03:44:18 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:44:18.823131 | orchestrator | 2026-01-05 03:44:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:21.867225 | orchestrator | 2026-01-05 03:44:21 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:44:21.869871 | orchestrator | 2026-01-05 03:44:21 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:44:21.869995 | orchestrator | 2026-01-05 03:44:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:24.915923 | orchestrator | 2026-01-05 03:44:24 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:44:24.918291 | orchestrator | 2026-01-05 03:44:24 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:44:24.918347 | orchestrator | 2026-01-05 03:44:24 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:44:27.975351 | orchestrator | 2026-01-05 03:44:27 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:44:27.978514 | orchestrator | 2026-01-05 03:44:27 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:44:27.978689 | orchestrator | 2026-01-05 03:44:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:31.025352 | orchestrator | 2026-01-05 03:44:31 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:44:31.026871 | orchestrator | 2026-01-05 03:44:31 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:44:31.026958 | orchestrator | 2026-01-05 03:44:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:34.066011 | orchestrator | 2026-01-05 03:44:34 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:44:34.068244 | orchestrator | 2026-01-05 03:44:34 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:44:34.068370 | orchestrator | 2026-01-05 03:44:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:37.117314 | orchestrator | 2026-01-05 03:44:37 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:44:37.119974 | orchestrator | 2026-01-05 03:44:37 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:44:37.120043 | orchestrator | 2026-01-05 03:44:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:40.164842 | orchestrator | 2026-01-05 03:44:40 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:44:40.167645 | orchestrator | 2026-01-05 03:44:40 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:44:40.167773 | orchestrator | 2026-01-05 03:44:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:43.215386 | orchestrator | 2026-01-05 
03:44:43 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:44:43.216176 | orchestrator | 2026-01-05 03:44:43 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:44:43.216211 | orchestrator | 2026-01-05 03:44:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:46.268155 | orchestrator | 2026-01-05 03:44:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:44:46.269843 | orchestrator | 2026-01-05 03:44:46 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:44:46.269874 | orchestrator | 2026-01-05 03:44:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:49.308802 | orchestrator | 2026-01-05 03:44:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:44:49.310619 | orchestrator | 2026-01-05 03:44:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:44:49.310671 | orchestrator | 2026-01-05 03:44:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:52.358395 | orchestrator | 2026-01-05 03:44:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:44:52.359875 | orchestrator | 2026-01-05 03:44:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:44:52.359963 | orchestrator | 2026-01-05 03:44:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:55.408971 | orchestrator | 2026-01-05 03:44:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:44:55.410285 | orchestrator | 2026-01-05 03:44:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:44:55.410318 | orchestrator | 2026-01-05 03:44:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:44:58.458687 | orchestrator | 2026-01-05 03:44:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 03:44:58.460374 | orchestrator | 2026-01-05 03:44:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:44:58.460720 | orchestrator | 2026-01-05 03:44:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:45:01.512977 | orchestrator | 2026-01-05 03:45:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:45:01.514620 | orchestrator | 2026-01-05 03:45:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:45:01.514725 | orchestrator | 2026-01-05 03:45:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:45:04.565793 | orchestrator | 2026-01-05 03:45:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:45:04.567040 | orchestrator | 2026-01-05 03:45:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:45:04.567084 | orchestrator | 2026-01-05 03:45:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:45:07.606414 | orchestrator | 2026-01-05 03:45:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:45:07.608628 | orchestrator | 2026-01-05 03:45:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:45:07.608672 | orchestrator | 2026-01-05 03:45:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:45:10.662322 | orchestrator | 2026-01-05 03:45:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:45:10.663552 | orchestrator | 2026-01-05 03:45:10 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:45:10.663651 | orchestrator | 2026-01-05 03:45:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:45:13.718076 | orchestrator | 2026-01-05 03:45:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:45:13.719868 | orchestrator | 2026-01-05 03:45:13 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:45:13.719945 | orchestrator | 2026-01-05 03:45:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:45:16.773413 | orchestrator | 2026-01-05 03:45:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:45:16.775326 | orchestrator | 2026-01-05 03:45:16 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:45:16.775383 | orchestrator | 2026-01-05 03:45:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:45:19.819603 | orchestrator | 2026-01-05 03:45:19 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:45:19.822557 | orchestrator | 2026-01-05 03:45:19 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:45:19.823097 | orchestrator | 2026-01-05 03:45:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:45:22.868463 | orchestrator | 2026-01-05 03:45:22 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:45:22.871065 | orchestrator | 2026-01-05 03:45:22 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:45:22.871328 | orchestrator | 2026-01-05 03:45:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:45:25.922328 | orchestrator | 2026-01-05 03:45:25 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:45:25.923827 | orchestrator | 2026-01-05 03:45:25 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:45:25.923869 | orchestrator | 2026-01-05 03:45:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:45:28.971939 | orchestrator | 2026-01-05 03:45:28 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:45:28.973577 | orchestrator | 2026-01-05 03:45:28 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
03:45:28.973646 | orchestrator | 2026-01-05 03:45:28 | INFO  | Wait 1 second(s) until the next check
2026-01-05 03:45:32.019421 | orchestrator | 2026-01-05 03:45:32 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 03:45:32.021362 | orchestrator | 2026-01-05 03:45:32 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 03:45:32.021421 | orchestrator | 2026-01-05 03:45:32 | INFO  | Wait 1 second(s) until the next check
[... repeated polling output trimmed: the same three lines recur every ~3 seconds from 03:45:35 to 03:50:27; tasks e3a9f185-bcb6-4913-bb1a-d444ee1687d0 and 00e2a2c6-6b94-416a-ac35-b73676807745 remained in state STARTED throughout ...]
2026-01-05 03:50:30.944953 | orchestrator | 2026-01-05 03:50:30 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 03:50:30.946909 | orchestrator | 2026-01-05 03:50:30 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:50:30.947085 | orchestrator | 2026-01-05 03:50:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:50:33.993821 | orchestrator | 2026-01-05 03:50:33 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:50:33.995947 | orchestrator | 2026-01-05 03:50:33 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:50:33.996727 | orchestrator | 2026-01-05 03:50:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:50:37.042951 | orchestrator | 2026-01-05 03:50:37 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:50:37.045593 | orchestrator | 2026-01-05 03:50:37 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:50:37.045667 | orchestrator | 2026-01-05 03:50:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:50:40.095221 | orchestrator | 2026-01-05 03:50:40 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:50:40.097706 | orchestrator | 2026-01-05 03:50:40 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:50:40.097792 | orchestrator | 2026-01-05 03:50:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:50:43.139218 | orchestrator | 2026-01-05 03:50:43 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:50:43.140696 | orchestrator | 2026-01-05 03:50:43 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:50:43.140753 | orchestrator | 2026-01-05 03:50:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:50:46.194752 | orchestrator | 2026-01-05 03:50:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:50:46.196534 | orchestrator | 2026-01-05 03:50:46 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
03:50:46.196615 | orchestrator | 2026-01-05 03:50:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:50:49.243562 | orchestrator | 2026-01-05 03:50:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:50:49.246305 | orchestrator | 2026-01-05 03:50:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:50:49.246367 | orchestrator | 2026-01-05 03:50:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:50:52.295406 | orchestrator | 2026-01-05 03:50:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:50:52.297755 | orchestrator | 2026-01-05 03:50:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:50:52.297916 | orchestrator | 2026-01-05 03:50:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:50:55.345049 | orchestrator | 2026-01-05 03:50:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:50:55.345777 | orchestrator | 2026-01-05 03:50:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:50:55.346502 | orchestrator | 2026-01-05 03:50:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:50:58.395957 | orchestrator | 2026-01-05 03:50:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:50:58.398227 | orchestrator | 2026-01-05 03:50:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:50:58.398528 | orchestrator | 2026-01-05 03:50:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:51:01.450499 | orchestrator | 2026-01-05 03:51:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:51:01.451908 | orchestrator | 2026-01-05 03:51:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:51:01.451951 | orchestrator | 2026-01-05 03:51:01 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:51:04.500911 | orchestrator | 2026-01-05 03:51:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:51:04.503113 | orchestrator | 2026-01-05 03:51:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:51:04.503242 | orchestrator | 2026-01-05 03:51:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:51:07.551981 | orchestrator | 2026-01-05 03:51:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:51:07.554274 | orchestrator | 2026-01-05 03:51:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:51:07.554381 | orchestrator | 2026-01-05 03:51:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:51:10.604574 | orchestrator | 2026-01-05 03:51:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:51:10.606548 | orchestrator | 2026-01-05 03:51:10 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:51:10.606595 | orchestrator | 2026-01-05 03:51:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:51:13.654919 | orchestrator | 2026-01-05 03:51:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:51:13.655980 | orchestrator | 2026-01-05 03:51:13 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:51:13.656033 | orchestrator | 2026-01-05 03:51:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:51:16.703561 | orchestrator | 2026-01-05 03:51:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:51:16.706809 | orchestrator | 2026-01-05 03:51:16 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:51:16.706887 | orchestrator | 2026-01-05 03:51:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:51:19.759072 | orchestrator | 2026-01-05 
03:51:19 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:51:19.760908 | orchestrator | 2026-01-05 03:51:19 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:51:19.761002 | orchestrator | 2026-01-05 03:51:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:51:22.811060 | orchestrator | 2026-01-05 03:51:22 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:51:22.813800 | orchestrator | 2026-01-05 03:51:22 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:51:22.813872 | orchestrator | 2026-01-05 03:51:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:51:25.860132 | orchestrator | 2026-01-05 03:51:25 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:51:25.861675 | orchestrator | 2026-01-05 03:51:25 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:51:25.861823 | orchestrator | 2026-01-05 03:51:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:51:28.907631 | orchestrator | 2026-01-05 03:51:28 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:51:28.909583 | orchestrator | 2026-01-05 03:51:28 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:51:28.909659 | orchestrator | 2026-01-05 03:51:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:51:31.958577 | orchestrator | 2026-01-05 03:51:31 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:51:31.960241 | orchestrator | 2026-01-05 03:51:31 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:51:31.960333 | orchestrator | 2026-01-05 03:51:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:51:35.003710 | orchestrator | 2026-01-05 03:51:35 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 03:51:35.005868 | orchestrator | 2026-01-05 03:51:35 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:51:35.006126 | orchestrator | 2026-01-05 03:51:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:51:38.057467 | orchestrator | 2026-01-05 03:51:38 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:51:38.058704 | orchestrator | 2026-01-05 03:51:38 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:51:38.058849 | orchestrator | 2026-01-05 03:51:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:51:41.100940 | orchestrator | 2026-01-05 03:51:41 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:51:41.102331 | orchestrator | 2026-01-05 03:51:41 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:51:41.102562 | orchestrator | 2026-01-05 03:51:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:51:44.146709 | orchestrator | 2026-01-05 03:51:44 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:51:44.149789 | orchestrator | 2026-01-05 03:51:44 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:51:44.149842 | orchestrator | 2026-01-05 03:51:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:51:47.203595 | orchestrator | 2026-01-05 03:51:47 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:51:47.206111 | orchestrator | 2026-01-05 03:51:47 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:51:47.206144 | orchestrator | 2026-01-05 03:51:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:51:50.254746 | orchestrator | 2026-01-05 03:51:50 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:51:50.256190 | orchestrator | 2026-01-05 03:51:50 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:51:50.256245 | orchestrator | 2026-01-05 03:51:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:51:53.304595 | orchestrator | 2026-01-05 03:51:53 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:51:53.307286 | orchestrator | 2026-01-05 03:51:53 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:51:53.307342 | orchestrator | 2026-01-05 03:51:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:51:56.354338 | orchestrator | 2026-01-05 03:51:56 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:51:56.356008 | orchestrator | 2026-01-05 03:51:56 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:51:56.356110 | orchestrator | 2026-01-05 03:51:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:51:59.407236 | orchestrator | 2026-01-05 03:51:59 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:51:59.409470 | orchestrator | 2026-01-05 03:51:59 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:51:59.409528 | orchestrator | 2026-01-05 03:51:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:02.457222 | orchestrator | 2026-01-05 03:52:02 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:52:02.458727 | orchestrator | 2026-01-05 03:52:02 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:52:02.458766 | orchestrator | 2026-01-05 03:52:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:05.509082 | orchestrator | 2026-01-05 03:52:05 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:52:05.511573 | orchestrator | 2026-01-05 03:52:05 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
03:52:05.511627 | orchestrator | 2026-01-05 03:52:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:08.559876 | orchestrator | 2026-01-05 03:52:08 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:52:08.561136 | orchestrator | 2026-01-05 03:52:08 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:52:08.561177 | orchestrator | 2026-01-05 03:52:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:11.609050 | orchestrator | 2026-01-05 03:52:11 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:52:11.610938 | orchestrator | 2026-01-05 03:52:11 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:52:11.610984 | orchestrator | 2026-01-05 03:52:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:14.660044 | orchestrator | 2026-01-05 03:52:14 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:52:14.661472 | orchestrator | 2026-01-05 03:52:14 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:52:14.661629 | orchestrator | 2026-01-05 03:52:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:17.715638 | orchestrator | 2026-01-05 03:52:17 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:52:17.717919 | orchestrator | 2026-01-05 03:52:17 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:52:17.717992 | orchestrator | 2026-01-05 03:52:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:20.772172 | orchestrator | 2026-01-05 03:52:20 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:52:20.773595 | orchestrator | 2026-01-05 03:52:20 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:52:20.773716 | orchestrator | 2026-01-05 03:52:20 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:52:23.826495 | orchestrator | 2026-01-05 03:52:23 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:52:23.828187 | orchestrator | 2026-01-05 03:52:23 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:52:23.828315 | orchestrator | 2026-01-05 03:52:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:26.870647 | orchestrator | 2026-01-05 03:52:26 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:52:26.872510 | orchestrator | 2026-01-05 03:52:26 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:52:26.872573 | orchestrator | 2026-01-05 03:52:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:29.926327 | orchestrator | 2026-01-05 03:52:29 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:52:29.929543 | orchestrator | 2026-01-05 03:52:29 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:52:29.929625 | orchestrator | 2026-01-05 03:52:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:32.979634 | orchestrator | 2026-01-05 03:52:32 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:52:32.981841 | orchestrator | 2026-01-05 03:52:32 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:52:32.981916 | orchestrator | 2026-01-05 03:52:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:36.031987 | orchestrator | 2026-01-05 03:52:36 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:52:36.033722 | orchestrator | 2026-01-05 03:52:36 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:52:36.033765 | orchestrator | 2026-01-05 03:52:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:39.071967 | orchestrator | 2026-01-05 
03:52:39 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:52:39.072896 | orchestrator | 2026-01-05 03:52:39 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:52:39.072928 | orchestrator | 2026-01-05 03:52:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:42.124155 | orchestrator | 2026-01-05 03:52:42 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:52:42.126590 | orchestrator | 2026-01-05 03:52:42 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:52:42.126661 | orchestrator | 2026-01-05 03:52:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:45.181960 | orchestrator | 2026-01-05 03:52:45 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:52:45.184059 | orchestrator | 2026-01-05 03:52:45 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:52:45.184118 | orchestrator | 2026-01-05 03:52:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:48.225163 | orchestrator | 2026-01-05 03:52:48 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:52:48.226218 | orchestrator | 2026-01-05 03:52:48 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:52:48.276340 | orchestrator | 2026-01-05 03:52:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:51.271317 | orchestrator | 2026-01-05 03:52:51 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:52:51.273151 | orchestrator | 2026-01-05 03:52:51 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:52:51.273201 | orchestrator | 2026-01-05 03:52:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:54.320269 | orchestrator | 2026-01-05 03:52:54 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 03:52:54.321706 | orchestrator | 2026-01-05 03:52:54 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:52:54.321764 | orchestrator | 2026-01-05 03:52:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:52:57.372981 | orchestrator | 2026-01-05 03:52:57 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:52:57.373912 | orchestrator | 2026-01-05 03:52:57 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:52:57.373942 | orchestrator | 2026-01-05 03:52:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:00.426873 | orchestrator | 2026-01-05 03:53:00 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:53:00.428667 | orchestrator | 2026-01-05 03:53:00 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:53:00.428746 | orchestrator | 2026-01-05 03:53:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:03.475916 | orchestrator | 2026-01-05 03:53:03 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:53:03.477832 | orchestrator | 2026-01-05 03:53:03 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:53:03.477891 | orchestrator | 2026-01-05 03:53:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:06.519209 | orchestrator | 2026-01-05 03:53:06 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:53:06.521700 | orchestrator | 2026-01-05 03:53:06 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:53:06.521783 | orchestrator | 2026-01-05 03:53:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:09.569087 | orchestrator | 2026-01-05 03:53:09 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:53:09.571062 | orchestrator | 2026-01-05 03:53:09 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:53:09.571378 | orchestrator | 2026-01-05 03:53:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:12.616651 | orchestrator | 2026-01-05 03:53:12 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:53:12.618422 | orchestrator | 2026-01-05 03:53:12 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:53:12.618522 | orchestrator | 2026-01-05 03:53:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:15.661202 | orchestrator | 2026-01-05 03:53:15 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:53:15.663949 | orchestrator | 2026-01-05 03:53:15 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:53:15.664022 | orchestrator | 2026-01-05 03:53:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:18.715054 | orchestrator | 2026-01-05 03:53:18 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:53:18.717700 | orchestrator | 2026-01-05 03:53:18 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:53:18.717764 | orchestrator | 2026-01-05 03:53:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:21.763123 | orchestrator | 2026-01-05 03:53:21 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:53:21.765325 | orchestrator | 2026-01-05 03:53:21 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:53:21.765401 | orchestrator | 2026-01-05 03:53:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:24.817468 | orchestrator | 2026-01-05 03:53:24 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:53:24.821289 | orchestrator | 2026-01-05 03:53:24 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
03:53:24.821393 | orchestrator | 2026-01-05 03:53:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:27.869151 | orchestrator | 2026-01-05 03:53:27 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:53:27.870858 | orchestrator | 2026-01-05 03:53:27 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:53:27.870918 | orchestrator | 2026-01-05 03:53:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:30.920148 | orchestrator | 2026-01-05 03:53:30 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:53:30.923497 | orchestrator | 2026-01-05 03:53:30 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:53:30.923595 | orchestrator | 2026-01-05 03:53:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:33.973322 | orchestrator | 2026-01-05 03:53:33 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:53:33.974783 | orchestrator | 2026-01-05 03:53:33 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:53:33.974824 | orchestrator | 2026-01-05 03:53:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:37.023280 | orchestrator | 2026-01-05 03:53:37 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:53:37.024695 | orchestrator | 2026-01-05 03:53:37 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:53:37.024801 | orchestrator | 2026-01-05 03:53:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:40.071950 | orchestrator | 2026-01-05 03:53:40 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:53:40.074393 | orchestrator | 2026-01-05 03:53:40 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:53:40.074626 | orchestrator | 2026-01-05 03:53:40 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 03:53:43.113997 | orchestrator | 2026-01-05 03:53:43 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:53:43.115134 | orchestrator | 2026-01-05 03:53:43 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:53:43.115180 | orchestrator | 2026-01-05 03:53:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:46.163817 | orchestrator | 2026-01-05 03:53:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:53:46.165914 | orchestrator | 2026-01-05 03:53:46 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:53:46.165950 | orchestrator | 2026-01-05 03:53:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:49.210329 | orchestrator | 2026-01-05 03:53:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:53:49.211683 | orchestrator | 2026-01-05 03:53:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:53:49.211717 | orchestrator | 2026-01-05 03:53:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:52.271470 | orchestrator | 2026-01-05 03:53:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:53:52.272774 | orchestrator | 2026-01-05 03:53:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:53:52.272805 | orchestrator | 2026-01-05 03:53:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:55.326418 | orchestrator | 2026-01-05 03:53:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:53:55.328810 | orchestrator | 2026-01-05 03:53:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:53:55.328855 | orchestrator | 2026-01-05 03:53:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:53:58.382755 | orchestrator | 2026-01-05 
03:53:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:53:58.384438 | orchestrator | 2026-01-05 03:53:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:53:58.384718 | orchestrator | 2026-01-05 03:53:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:54:01.438098 | orchestrator | 2026-01-05 03:54:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:54:01.440454 | orchestrator | 2026-01-05 03:54:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:54:01.440774 | orchestrator | 2026-01-05 03:54:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:54:04.497808 | orchestrator | 2026-01-05 03:54:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:54:04.501046 | orchestrator | 2026-01-05 03:54:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:54:04.501496 | orchestrator | 2026-01-05 03:54:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:54:07.548687 | orchestrator | 2026-01-05 03:54:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:54:07.550198 | orchestrator | 2026-01-05 03:54:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:54:07.550296 | orchestrator | 2026-01-05 03:54:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:54:10.599296 | orchestrator | 2026-01-05 03:54:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:54:10.601320 | orchestrator | 2026-01-05 03:54:10 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:54:10.601371 | orchestrator | 2026-01-05 03:54:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:54:13.646261 | orchestrator | 2026-01-05 03:54:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED
2026-01-05 03:54:13.647840 | orchestrator | 2026-01-05 03:54:13 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 03:54:13.647979 | orchestrator | 2026-01-05 03:54:13 | INFO  | Wait 1 second(s) until the next check
2026-01-05 03:54:16.698619 | orchestrator | 2026-01-05 03:54:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 03:54:16.700019 | orchestrator | 2026-01-05 03:54:16 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 03:54:16.700101 | orchestrator | 2026-01-05 03:54:16 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds; tasks e3a9f185-bcb6-4913-bb1a-d444ee1687d0 and 00e2a2c6-6b94-416a-ac35-b73676807745 remain in state STARTED through 2026-01-05 03:59:46 ...]
2026-01-05 03:59:46.338599 | orchestrator | 2026-01-05 03:59:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 03:59:46.342311 | orchestrator | 2026-01-05 03:59:46 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:59:46.342402 | orchestrator | 2026-01-05 03:59:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:59:49.396287 | orchestrator | 2026-01-05 03:59:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:59:49.401025 | orchestrator | 2026-01-05 03:59:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:59:49.401167 | orchestrator | 2026-01-05 03:59:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:59:52.453200 | orchestrator | 2026-01-05 03:59:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:59:52.454884 | orchestrator | 2026-01-05 03:59:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:59:52.455297 | orchestrator | 2026-01-05 03:59:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:59:55.510658 | orchestrator | 2026-01-05 03:59:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:59:55.515853 | orchestrator | 2026-01-05 03:59:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:59:55.516018 | orchestrator | 2026-01-05 03:59:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 03:59:58.556072 | orchestrator | 2026-01-05 03:59:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 03:59:58.556627 | orchestrator | 2026-01-05 03:59:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 03:59:58.556706 | orchestrator | 2026-01-05 03:59:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:00:01.605416 | orchestrator | 2026-01-05 04:00:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:00:01.607097 | orchestrator | 2026-01-05 04:00:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
04:00:01.607164 | orchestrator | 2026-01-05 04:00:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:00:04.653018 | orchestrator | 2026-01-05 04:00:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:00:04.655798 | orchestrator | 2026-01-05 04:00:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:00:04.655871 | orchestrator | 2026-01-05 04:00:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:00:07.704617 | orchestrator | 2026-01-05 04:00:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:00:07.706426 | orchestrator | 2026-01-05 04:00:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:00:07.706473 | orchestrator | 2026-01-05 04:00:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:00:10.758696 | orchestrator | 2026-01-05 04:00:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:00:10.760516 | orchestrator | 2026-01-05 04:00:10 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:00:10.760561 | orchestrator | 2026-01-05 04:00:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:00:13.808558 | orchestrator | 2026-01-05 04:00:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:00:13.809645 | orchestrator | 2026-01-05 04:00:13 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:00:13.809699 | orchestrator | 2026-01-05 04:00:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:00:16.857148 | orchestrator | 2026-01-05 04:00:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:00:16.859016 | orchestrator | 2026-01-05 04:00:16 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:00:16.859057 | orchestrator | 2026-01-05 04:00:16 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 04:00:19.908315 | orchestrator | 2026-01-05 04:00:19 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:00:19.911113 | orchestrator | 2026-01-05 04:00:19 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:00:19.911172 | orchestrator | 2026-01-05 04:00:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:00:22.955680 | orchestrator | 2026-01-05 04:00:22 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:00:22.957609 | orchestrator | 2026-01-05 04:00:22 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:00:22.957659 | orchestrator | 2026-01-05 04:00:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:00:26.011866 | orchestrator | 2026-01-05 04:00:26 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:00:26.013396 | orchestrator | 2026-01-05 04:00:26 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:00:26.013476 | orchestrator | 2026-01-05 04:00:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:00:29.059009 | orchestrator | 2026-01-05 04:00:29 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:00:29.062590 | orchestrator | 2026-01-05 04:00:29 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:00:29.062675 | orchestrator | 2026-01-05 04:00:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:00:32.116524 | orchestrator | 2026-01-05 04:00:32 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:00:32.117563 | orchestrator | 2026-01-05 04:00:32 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:00:32.117649 | orchestrator | 2026-01-05 04:00:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:00:35.160813 | orchestrator | 2026-01-05 
04:00:35 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:00:35.162475 | orchestrator | 2026-01-05 04:00:35 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:00:35.162510 | orchestrator | 2026-01-05 04:00:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:00:38.204480 | orchestrator | 2026-01-05 04:00:38 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:00:38.206807 | orchestrator | 2026-01-05 04:00:38 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:00:38.207168 | orchestrator | 2026-01-05 04:00:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:00:41.260793 | orchestrator | 2026-01-05 04:00:41 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:00:41.264074 | orchestrator | 2026-01-05 04:00:41 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:00:41.264164 | orchestrator | 2026-01-05 04:00:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:00:44.316721 | orchestrator | 2026-01-05 04:00:44 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:00:44.318732 | orchestrator | 2026-01-05 04:00:44 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:00:44.318802 | orchestrator | 2026-01-05 04:00:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:00:47.366255 | orchestrator | 2026-01-05 04:00:47 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:00:47.368415 | orchestrator | 2026-01-05 04:00:47 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:00:47.368508 | orchestrator | 2026-01-05 04:00:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:00:50.424969 | orchestrator | 2026-01-05 04:00:50 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 04:00:50.425167 | orchestrator | 2026-01-05 04:00:50 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:00:50.425186 | orchestrator | 2026-01-05 04:00:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:00:53.474464 | orchestrator | 2026-01-05 04:00:53 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:00:53.477014 | orchestrator | 2026-01-05 04:00:53 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:00:53.477068 | orchestrator | 2026-01-05 04:00:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:00:56.536099 | orchestrator | 2026-01-05 04:00:56 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:00:56.536954 | orchestrator | 2026-01-05 04:00:56 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:00:56.537049 | orchestrator | 2026-01-05 04:00:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:00:59.584608 | orchestrator | 2026-01-05 04:00:59 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:00:59.584917 | orchestrator | 2026-01-05 04:00:59 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:00:59.584980 | orchestrator | 2026-01-05 04:00:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:02.634338 | orchestrator | 2026-01-05 04:01:02 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:01:02.636464 | orchestrator | 2026-01-05 04:01:02 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:01:02.636521 | orchestrator | 2026-01-05 04:01:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:05.685718 | orchestrator | 2026-01-05 04:01:05 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:01:05.687547 | orchestrator | 2026-01-05 04:01:05 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:01:05.687613 | orchestrator | 2026-01-05 04:01:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:08.741941 | orchestrator | 2026-01-05 04:01:08 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:01:08.745277 | orchestrator | 2026-01-05 04:01:08 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:01:08.745360 | orchestrator | 2026-01-05 04:01:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:11.797113 | orchestrator | 2026-01-05 04:01:11 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:01:11.798899 | orchestrator | 2026-01-05 04:01:11 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:01:11.798974 | orchestrator | 2026-01-05 04:01:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:14.848718 | orchestrator | 2026-01-05 04:01:14 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:01:14.850571 | orchestrator | 2026-01-05 04:01:14 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:01:14.850650 | orchestrator | 2026-01-05 04:01:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:17.898437 | orchestrator | 2026-01-05 04:01:17 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:01:17.900439 | orchestrator | 2026-01-05 04:01:17 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:01:17.900514 | orchestrator | 2026-01-05 04:01:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:20.949613 | orchestrator | 2026-01-05 04:01:20 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:01:20.951342 | orchestrator | 2026-01-05 04:01:20 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
04:01:20.951567 | orchestrator | 2026-01-05 04:01:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:23.999964 | orchestrator | 2026-01-05 04:01:23 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:01:24.003199 | orchestrator | 2026-01-05 04:01:23 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:01:24.004097 | orchestrator | 2026-01-05 04:01:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:27.049604 | orchestrator | 2026-01-05 04:01:27 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:01:27.051208 | orchestrator | 2026-01-05 04:01:27 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:01:27.051242 | orchestrator | 2026-01-05 04:01:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:30.094370 | orchestrator | 2026-01-05 04:01:30 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:01:30.094905 | orchestrator | 2026-01-05 04:01:30 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:01:30.094937 | orchestrator | 2026-01-05 04:01:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:33.148105 | orchestrator | 2026-01-05 04:01:33 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:01:33.150447 | orchestrator | 2026-01-05 04:01:33 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:01:33.150583 | orchestrator | 2026-01-05 04:01:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:36.201255 | orchestrator | 2026-01-05 04:01:36 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:01:36.203003 | orchestrator | 2026-01-05 04:01:36 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:01:36.203092 | orchestrator | 2026-01-05 04:01:36 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 04:01:39.252854 | orchestrator | 2026-01-05 04:01:39 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:01:39.255146 | orchestrator | 2026-01-05 04:01:39 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:01:39.255204 | orchestrator | 2026-01-05 04:01:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:42.306812 | orchestrator | 2026-01-05 04:01:42 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:01:42.308399 | orchestrator | 2026-01-05 04:01:42 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:01:42.308438 | orchestrator | 2026-01-05 04:01:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:45.356496 | orchestrator | 2026-01-05 04:01:45 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:01:45.358765 | orchestrator | 2026-01-05 04:01:45 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:01:45.358853 | orchestrator | 2026-01-05 04:01:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:48.410291 | orchestrator | 2026-01-05 04:01:48 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:01:48.412167 | orchestrator | 2026-01-05 04:01:48 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:01:48.412364 | orchestrator | 2026-01-05 04:01:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:51.459766 | orchestrator | 2026-01-05 04:01:51 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:01:51.461014 | orchestrator | 2026-01-05 04:01:51 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:01:51.461130 | orchestrator | 2026-01-05 04:01:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:54.518894 | orchestrator | 2026-01-05 
04:01:54 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:01:54.520729 | orchestrator | 2026-01-05 04:01:54 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:01:54.520768 | orchestrator | 2026-01-05 04:01:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:01:57.570512 | orchestrator | 2026-01-05 04:01:57 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:01:57.572378 | orchestrator | 2026-01-05 04:01:57 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:01:57.572427 | orchestrator | 2026-01-05 04:01:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:00.624703 | orchestrator | 2026-01-05 04:02:00 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:02:00.627848 | orchestrator | 2026-01-05 04:02:00 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:02:00.628085 | orchestrator | 2026-01-05 04:02:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:03.676465 | orchestrator | 2026-01-05 04:02:03 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:02:03.677645 | orchestrator | 2026-01-05 04:02:03 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:02:03.677765 | orchestrator | 2026-01-05 04:02:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:06.728822 | orchestrator | 2026-01-05 04:02:06 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:02:06.731786 | orchestrator | 2026-01-05 04:02:06 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:02:06.731863 | orchestrator | 2026-01-05 04:02:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:09.782846 | orchestrator | 2026-01-05 04:02:09 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 04:02:09.785022 | orchestrator | 2026-01-05 04:02:09 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:02:09.839613 | orchestrator | 2026-01-05 04:02:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:12.832713 | orchestrator | 2026-01-05 04:02:12 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:02:12.834798 | orchestrator | 2026-01-05 04:02:12 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:02:12.834905 | orchestrator | 2026-01-05 04:02:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:15.886269 | orchestrator | 2026-01-05 04:02:15 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:02:15.886945 | orchestrator | 2026-01-05 04:02:15 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:02:15.886981 | orchestrator | 2026-01-05 04:02:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:18.936058 | orchestrator | 2026-01-05 04:02:18 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:02:18.937682 | orchestrator | 2026-01-05 04:02:18 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:02:18.938211 | orchestrator | 2026-01-05 04:02:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:21.988361 | orchestrator | 2026-01-05 04:02:21 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:02:21.990318 | orchestrator | 2026-01-05 04:02:21 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:02:21.990379 | orchestrator | 2026-01-05 04:02:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:25.039840 | orchestrator | 2026-01-05 04:02:25 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:02:25.041038 | orchestrator | 2026-01-05 04:02:25 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:02:25.041072 | orchestrator | 2026-01-05 04:02:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:28.085584 | orchestrator | 2026-01-05 04:02:28 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:02:28.087323 | orchestrator | 2026-01-05 04:02:28 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:02:28.087413 | orchestrator | 2026-01-05 04:02:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:31.126554 | orchestrator | 2026-01-05 04:02:31 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:02:31.128204 | orchestrator | 2026-01-05 04:02:31 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:02:31.128277 | orchestrator | 2026-01-05 04:02:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:34.172986 | orchestrator | 2026-01-05 04:02:34 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:02:34.176794 | orchestrator | 2026-01-05 04:02:34 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:02:34.176849 | orchestrator | 2026-01-05 04:02:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:37.228337 | orchestrator | 2026-01-05 04:02:37 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:02:37.230608 | orchestrator | 2026-01-05 04:02:37 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:02:37.230675 | orchestrator | 2026-01-05 04:02:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:40.275108 | orchestrator | 2026-01-05 04:02:40 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:02:40.276020 | orchestrator | 2026-01-05 04:02:40 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
04:02:40.276064 | orchestrator | 2026-01-05 04:02:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:43.324923 | orchestrator | 2026-01-05 04:02:43 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:02:43.326897 | orchestrator | 2026-01-05 04:02:43 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:02:43.327072 | orchestrator | 2026-01-05 04:02:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:46.374374 | orchestrator | 2026-01-05 04:02:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:02:46.375784 | orchestrator | 2026-01-05 04:02:46 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:02:46.375849 | orchestrator | 2026-01-05 04:02:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:49.421834 | orchestrator | 2026-01-05 04:02:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:02:49.422336 | orchestrator | 2026-01-05 04:02:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:02:49.422364 | orchestrator | 2026-01-05 04:02:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:52.474325 | orchestrator | 2026-01-05 04:02:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:02:52.475028 | orchestrator | 2026-01-05 04:02:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:02:52.475328 | orchestrator | 2026-01-05 04:02:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:02:55.525124 | orchestrator | 2026-01-05 04:02:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:02:55.525516 | orchestrator | 2026-01-05 04:02:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:02:55.525697 | orchestrator | 2026-01-05 04:02:55 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 04:02:58.570508 | orchestrator | 2026-01-05 04:02:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:02:58.572260 | orchestrator | 2026-01-05 04:02:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:02:58.572304 | orchestrator | 2026-01-05 04:02:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:03:01.625658 | orchestrator | 2026-01-05 04:03:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:03:01.627410 | orchestrator | 2026-01-05 04:03:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:03:01.627471 | orchestrator | 2026-01-05 04:03:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:03:04.678728 | orchestrator | 2026-01-05 04:03:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:03:04.681465 | orchestrator | 2026-01-05 04:03:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:03:04.681585 | orchestrator | 2026-01-05 04:03:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:03:07.738933 | orchestrator | 2026-01-05 04:03:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:03:07.740545 | orchestrator | 2026-01-05 04:03:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:03:07.740591 | orchestrator | 2026-01-05 04:03:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:03:10.794224 | orchestrator | 2026-01-05 04:03:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:03:10.795921 | orchestrator | 2026-01-05 04:03:10 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:03:10.795976 | orchestrator | 2026-01-05 04:03:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:03:13.841274 | orchestrator | 2026-01-05 
04:03:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:03:13.844141 | orchestrator | 2026-01-05 04:03:13 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:03:13.844219 | orchestrator | 2026-01-05 04:03:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:03:16.898358 | orchestrator | 2026-01-05 04:03:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:03:16.899669 | orchestrator | 2026-01-05 04:03:16 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:03:16.899718 | orchestrator | 2026-01-05 04:03:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:03:19.944493 | orchestrator | 2026-01-05 04:03:19 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:03:19.946234 | orchestrator | 2026-01-05 04:03:19 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:03:19.946331 | orchestrator | 2026-01-05 04:03:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:03:22.991662 | orchestrator | 2026-01-05 04:03:22 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:03:22.992047 | orchestrator | 2026-01-05 04:03:22 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:03:22.992080 | orchestrator | 2026-01-05 04:03:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:03:26.051830 | orchestrator | 2026-01-05 04:03:26 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:03:26.055411 | orchestrator | 2026-01-05 04:03:26 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:03:26.055510 | orchestrator | 2026-01-05 04:03:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:03:29.103085 | orchestrator | 2026-01-05 04:03:29 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 04:03:29.103625 | orchestrator | 2026-01-05 04:03:29 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:03:29.103693 | orchestrator | 2026-01-05 04:03:29 | INFO  | Wait 1 second(s) until the next check
[… identical polling output repeated every ~3 seconds: tasks e3a9f185-bcb6-4913-bb1a-d444ee1687d0 and 00e2a2c6-6b94-416a-ac35-b73676807745 remain in state STARTED from 04:03:32 through 04:08:43 …]
2026-01-05 04:08:46.517422 | orchestrator | 2026-01-05 04:08:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state
STARTED 2026-01-05 04:08:46.521042 | orchestrator | 2026-01-05 04:08:46 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:08:46.521143 | orchestrator | 2026-01-05 04:08:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:08:49.568162 | orchestrator | 2026-01-05 04:08:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:08:49.569599 | orchestrator | 2026-01-05 04:08:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:08:49.569622 | orchestrator | 2026-01-05 04:08:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:08:52.615918 | orchestrator | 2026-01-05 04:08:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:08:52.617877 | orchestrator | 2026-01-05 04:08:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:08:52.618073 | orchestrator | 2026-01-05 04:08:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:08:55.653837 | orchestrator | 2026-01-05 04:08:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:08:55.656006 | orchestrator | 2026-01-05 04:08:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:08:55.656081 | orchestrator | 2026-01-05 04:08:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:08:58.704759 | orchestrator | 2026-01-05 04:08:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:08:58.706341 | orchestrator | 2026-01-05 04:08:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:08:58.706385 | orchestrator | 2026-01-05 04:08:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:09:01.763895 | orchestrator | 2026-01-05 04:09:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:09:01.765267 | orchestrator | 2026-01-05 04:09:01 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:09:01.765391 | orchestrator | 2026-01-05 04:09:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:09:04.814768 | orchestrator | 2026-01-05 04:09:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:09:04.820292 | orchestrator | 2026-01-05 04:09:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:09:04.820381 | orchestrator | 2026-01-05 04:09:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:09:07.874605 | orchestrator | 2026-01-05 04:09:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:09:07.877508 | orchestrator | 2026-01-05 04:09:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:09:07.877596 | orchestrator | 2026-01-05 04:09:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:09:10.931069 | orchestrator | 2026-01-05 04:09:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:09:10.933440 | orchestrator | 2026-01-05 04:09:10 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:09:10.933485 | orchestrator | 2026-01-05 04:09:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:09:13.988598 | orchestrator | 2026-01-05 04:09:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:09:13.989967 | orchestrator | 2026-01-05 04:09:13 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:09:13.990430 | orchestrator | 2026-01-05 04:09:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:09:17.041852 | orchestrator | 2026-01-05 04:09:17 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:09:17.045088 | orchestrator | 2026-01-05 04:09:17 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
04:09:17.045248 | orchestrator | 2026-01-05 04:09:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:09:20.093063 | orchestrator | 2026-01-05 04:09:20 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:09:20.097246 | orchestrator | 2026-01-05 04:09:20 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:09:20.097311 | orchestrator | 2026-01-05 04:09:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:09:23.151134 | orchestrator | 2026-01-05 04:09:23 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:09:23.152990 | orchestrator | 2026-01-05 04:09:23 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:09:23.153149 | orchestrator | 2026-01-05 04:09:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:09:26.199769 | orchestrator | 2026-01-05 04:09:26 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:09:26.201880 | orchestrator | 2026-01-05 04:09:26 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:09:26.201935 | orchestrator | 2026-01-05 04:09:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:09:29.250082 | orchestrator | 2026-01-05 04:09:29 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:09:29.251711 | orchestrator | 2026-01-05 04:09:29 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:09:29.251795 | orchestrator | 2026-01-05 04:09:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:09:32.298909 | orchestrator | 2026-01-05 04:09:32 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:09:32.300456 | orchestrator | 2026-01-05 04:09:32 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:09:32.300489 | orchestrator | 2026-01-05 04:09:32 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 04:09:35.352806 | orchestrator | 2026-01-05 04:09:35 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:09:35.355517 | orchestrator | 2026-01-05 04:09:35 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:09:35.355553 | orchestrator | 2026-01-05 04:09:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:09:38.409567 | orchestrator | 2026-01-05 04:09:38 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:09:38.410765 | orchestrator | 2026-01-05 04:09:38 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:09:38.411189 | orchestrator | 2026-01-05 04:09:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:09:41.458292 | orchestrator | 2026-01-05 04:09:41 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:09:41.459675 | orchestrator | 2026-01-05 04:09:41 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:09:41.459815 | orchestrator | 2026-01-05 04:09:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:09:44.507217 | orchestrator | 2026-01-05 04:09:44 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:09:44.509647 | orchestrator | 2026-01-05 04:09:44 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:09:44.509762 | orchestrator | 2026-01-05 04:09:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:09:47.560097 | orchestrator | 2026-01-05 04:09:47 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:09:47.562257 | orchestrator | 2026-01-05 04:09:47 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:09:47.562432 | orchestrator | 2026-01-05 04:09:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:09:50.613788 | orchestrator | 2026-01-05 
04:09:50 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:09:50.615530 | orchestrator | 2026-01-05 04:09:50 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:09:50.615606 | orchestrator | 2026-01-05 04:09:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:09:53.659279 | orchestrator | 2026-01-05 04:09:53 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:09:53.660512 | orchestrator | 2026-01-05 04:09:53 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:09:53.660575 | orchestrator | 2026-01-05 04:09:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:09:56.712943 | orchestrator | 2026-01-05 04:09:56 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:09:56.715869 | orchestrator | 2026-01-05 04:09:56 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:09:56.716088 | orchestrator | 2026-01-05 04:09:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:09:59.761470 | orchestrator | 2026-01-05 04:09:59 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:09:59.763103 | orchestrator | 2026-01-05 04:09:59 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:09:59.763186 | orchestrator | 2026-01-05 04:09:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:02.804999 | orchestrator | 2026-01-05 04:10:02 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:10:02.807530 | orchestrator | 2026-01-05 04:10:02 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:10:02.807571 | orchestrator | 2026-01-05 04:10:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:05.857181 | orchestrator | 2026-01-05 04:10:05 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 04:10:05.859084 | orchestrator | 2026-01-05 04:10:05 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:10:05.859144 | orchestrator | 2026-01-05 04:10:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:08.911992 | orchestrator | 2026-01-05 04:10:08 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:10:08.913703 | orchestrator | 2026-01-05 04:10:08 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:10:08.913945 | orchestrator | 2026-01-05 04:10:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:11.966867 | orchestrator | 2026-01-05 04:10:11 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:10:11.969458 | orchestrator | 2026-01-05 04:10:11 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:10:11.969545 | orchestrator | 2026-01-05 04:10:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:15.021385 | orchestrator | 2026-01-05 04:10:15 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:10:15.023770 | orchestrator | 2026-01-05 04:10:15 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:10:15.023824 | orchestrator | 2026-01-05 04:10:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:18.074659 | orchestrator | 2026-01-05 04:10:18 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:10:18.075973 | orchestrator | 2026-01-05 04:10:18 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:10:18.076079 | orchestrator | 2026-01-05 04:10:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:21.122585 | orchestrator | 2026-01-05 04:10:21 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:10:21.125193 | orchestrator | 2026-01-05 04:10:21 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:10:21.125445 | orchestrator | 2026-01-05 04:10:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:24.174727 | orchestrator | 2026-01-05 04:10:24 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:10:24.176153 | orchestrator | 2026-01-05 04:10:24 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:10:24.176197 | orchestrator | 2026-01-05 04:10:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:27.224186 | orchestrator | 2026-01-05 04:10:27 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:10:27.224411 | orchestrator | 2026-01-05 04:10:27 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:10:27.224777 | orchestrator | 2026-01-05 04:10:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:30.277187 | orchestrator | 2026-01-05 04:10:30 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:10:30.279097 | orchestrator | 2026-01-05 04:10:30 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:10:30.279448 | orchestrator | 2026-01-05 04:10:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:33.324862 | orchestrator | 2026-01-05 04:10:33 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:10:33.327452 | orchestrator | 2026-01-05 04:10:33 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:10:33.327518 | orchestrator | 2026-01-05 04:10:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:36.373768 | orchestrator | 2026-01-05 04:10:36 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:10:36.375967 | orchestrator | 2026-01-05 04:10:36 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
04:10:36.376015 | orchestrator | 2026-01-05 04:10:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:39.425066 | orchestrator | 2026-01-05 04:10:39 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:10:39.427607 | orchestrator | 2026-01-05 04:10:39 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:10:39.427672 | orchestrator | 2026-01-05 04:10:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:42.476322 | orchestrator | 2026-01-05 04:10:42 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:10:42.477712 | orchestrator | 2026-01-05 04:10:42 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:10:42.477752 | orchestrator | 2026-01-05 04:10:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:45.531719 | orchestrator | 2026-01-05 04:10:45 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:10:45.535301 | orchestrator | 2026-01-05 04:10:45 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:10:45.535364 | orchestrator | 2026-01-05 04:10:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:48.589093 | orchestrator | 2026-01-05 04:10:48 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:10:48.591118 | orchestrator | 2026-01-05 04:10:48 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:10:48.591162 | orchestrator | 2026-01-05 04:10:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:51.644612 | orchestrator | 2026-01-05 04:10:51 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:10:51.646527 | orchestrator | 2026-01-05 04:10:51 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:10:51.646594 | orchestrator | 2026-01-05 04:10:51 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 04:10:54.704075 | orchestrator | 2026-01-05 04:10:54 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:10:54.706817 | orchestrator | 2026-01-05 04:10:54 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:10:54.706862 | orchestrator | 2026-01-05 04:10:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:10:57.754893 | orchestrator | 2026-01-05 04:10:57 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:10:57.757008 | orchestrator | 2026-01-05 04:10:57 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:10:57.757077 | orchestrator | 2026-01-05 04:10:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:00.812067 | orchestrator | 2026-01-05 04:11:00 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:11:00.815537 | orchestrator | 2026-01-05 04:11:00 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:11:00.815589 | orchestrator | 2026-01-05 04:11:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:03.866083 | orchestrator | 2026-01-05 04:11:03 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:11:03.867132 | orchestrator | 2026-01-05 04:11:03 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:11:03.867248 | orchestrator | 2026-01-05 04:11:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:06.920784 | orchestrator | 2026-01-05 04:11:06 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:11:06.922354 | orchestrator | 2026-01-05 04:11:06 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:11:06.922455 | orchestrator | 2026-01-05 04:11:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:09.972423 | orchestrator | 2026-01-05 
04:11:09 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:11:09.973947 | orchestrator | 2026-01-05 04:11:09 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:11:09.973994 | orchestrator | 2026-01-05 04:11:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:13.022767 | orchestrator | 2026-01-05 04:11:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:11:13.024795 | orchestrator | 2026-01-05 04:11:13 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:11:13.024852 | orchestrator | 2026-01-05 04:11:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:16.072825 | orchestrator | 2026-01-05 04:11:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:11:16.073592 | orchestrator | 2026-01-05 04:11:16 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:11:16.073629 | orchestrator | 2026-01-05 04:11:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:19.123799 | orchestrator | 2026-01-05 04:11:19 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:11:19.124907 | orchestrator | 2026-01-05 04:11:19 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:11:19.124967 | orchestrator | 2026-01-05 04:11:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:22.177764 | orchestrator | 2026-01-05 04:11:22 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:11:22.179110 | orchestrator | 2026-01-05 04:11:22 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:11:22.179150 | orchestrator | 2026-01-05 04:11:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:25.230129 | orchestrator | 2026-01-05 04:11:25 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 04:11:25.232874 | orchestrator | 2026-01-05 04:11:25 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:11:25.232939 | orchestrator | 2026-01-05 04:11:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:28.283624 | orchestrator | 2026-01-05 04:11:28 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:11:28.286717 | orchestrator | 2026-01-05 04:11:28 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:11:28.286803 | orchestrator | 2026-01-05 04:11:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:31.329784 | orchestrator | 2026-01-05 04:11:31 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:11:31.331254 | orchestrator | 2026-01-05 04:11:31 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:11:31.331683 | orchestrator | 2026-01-05 04:11:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:34.381649 | orchestrator | 2026-01-05 04:11:34 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:11:34.383265 | orchestrator | 2026-01-05 04:11:34 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:11:34.383561 | orchestrator | 2026-01-05 04:11:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:37.433100 | orchestrator | 2026-01-05 04:11:37 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:11:37.434507 | orchestrator | 2026-01-05 04:11:37 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:11:37.434570 | orchestrator | 2026-01-05 04:11:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:40.482739 | orchestrator | 2026-01-05 04:11:40 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:11:40.482898 | orchestrator | 2026-01-05 04:11:40 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:11:40.482909 | orchestrator | 2026-01-05 04:11:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:43.534508 | orchestrator | 2026-01-05 04:11:43 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:11:43.536383 | orchestrator | 2026-01-05 04:11:43 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:11:43.536471 | orchestrator | 2026-01-05 04:11:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:46.581120 | orchestrator | 2026-01-05 04:11:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:11:46.583080 | orchestrator | 2026-01-05 04:11:46 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:11:46.583169 | orchestrator | 2026-01-05 04:11:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:49.631989 | orchestrator | 2026-01-05 04:11:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:11:49.633225 | orchestrator | 2026-01-05 04:11:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:11:49.633314 | orchestrator | 2026-01-05 04:11:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:52.682883 | orchestrator | 2026-01-05 04:11:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:11:52.686596 | orchestrator | 2026-01-05 04:11:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:11:52.686806 | orchestrator | 2026-01-05 04:11:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:55.733908 | orchestrator | 2026-01-05 04:11:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:11:55.736689 | orchestrator | 2026-01-05 04:11:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
04:11:55.736822 | orchestrator | 2026-01-05 04:11:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:11:58.787304 | orchestrator | 2026-01-05 04:11:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:11:58.789831 | orchestrator | 2026-01-05 04:11:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:11:58.789875 | orchestrator | 2026-01-05 04:11:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:12:01.835285 | orchestrator | 2026-01-05 04:12:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:12:01.837045 | orchestrator | 2026-01-05 04:12:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:12:01.837089 | orchestrator | 2026-01-05 04:12:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:12:04.883212 | orchestrator | 2026-01-05 04:12:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:12:04.885285 | orchestrator | 2026-01-05 04:12:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:12:04.885334 | orchestrator | 2026-01-05 04:12:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:12:07.930256 | orchestrator | 2026-01-05 04:12:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:12:07.932679 | orchestrator | 2026-01-05 04:12:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:12:07.932732 | orchestrator | 2026-01-05 04:12:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:12:10.977012 | orchestrator | 2026-01-05 04:12:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:12:10.978889 | orchestrator | 2026-01-05 04:12:10 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:12:10.978965 | orchestrator | 2026-01-05 04:12:10 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 04:12:14.028757 | orchestrator | 2026-01-05 04:12:14 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:12:14.030340 | orchestrator | 2026-01-05 04:12:14 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:12:14.030382 | orchestrator | 2026-01-05 04:12:14 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 04:12:17 through 04:17:25; both tasks remain in state STARTED throughout ...]
2026-01-05 04:17:28.403065 | orchestrator | 2026-01-05 04:17:28 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:17:28.405711 | orchestrator | 2026-01-05 04:17:28 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:17:28.405771 | orchestrator | 2026-01-05 04:17:28 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 04:17:31.457197 | orchestrator | 2026-01-05 04:17:31 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:17:31.457661 | orchestrator | 2026-01-05 04:17:31 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:17:31.457691 | orchestrator | 2026-01-05 04:17:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:17:34.512778 | orchestrator | 2026-01-05 04:17:34 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:17:34.515032 | orchestrator | 2026-01-05 04:17:34 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:17:34.515088 | orchestrator | 2026-01-05 04:17:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:17:37.567123 | orchestrator | 2026-01-05 04:17:37 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:17:37.569307 | orchestrator | 2026-01-05 04:17:37 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:17:37.569426 | orchestrator | 2026-01-05 04:17:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:17:40.618289 | orchestrator | 2026-01-05 04:17:40 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:17:40.621557 | orchestrator | 2026-01-05 04:17:40 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:17:40.621612 | orchestrator | 2026-01-05 04:17:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:17:43.668026 | orchestrator | 2026-01-05 04:17:43 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:17:43.669616 | orchestrator | 2026-01-05 04:17:43 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:17:43.669726 | orchestrator | 2026-01-05 04:17:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:17:46.724895 | orchestrator | 2026-01-05 
04:17:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:17:46.726129 | orchestrator | 2026-01-05 04:17:46 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:17:46.726167 | orchestrator | 2026-01-05 04:17:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:17:49.775554 | orchestrator | 2026-01-05 04:17:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:17:49.778403 | orchestrator | 2026-01-05 04:17:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:17:49.778481 | orchestrator | 2026-01-05 04:17:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:17:52.833212 | orchestrator | 2026-01-05 04:17:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:17:52.835062 | orchestrator | 2026-01-05 04:17:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:17:52.835192 | orchestrator | 2026-01-05 04:17:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:17:55.886685 | orchestrator | 2026-01-05 04:17:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:17:55.887715 | orchestrator | 2026-01-05 04:17:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:17:55.887758 | orchestrator | 2026-01-05 04:17:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:17:58.937923 | orchestrator | 2026-01-05 04:17:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:17:58.943185 | orchestrator | 2026-01-05 04:17:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:17:58.943287 | orchestrator | 2026-01-05 04:17:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:18:01.998567 | orchestrator | 2026-01-05 04:18:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 04:18:02.001625 | orchestrator | 2026-01-05 04:18:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:18:02.001765 | orchestrator | 2026-01-05 04:18:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:18:05.053760 | orchestrator | 2026-01-05 04:18:05 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:18:05.056170 | orchestrator | 2026-01-05 04:18:05 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:18:05.056222 | orchestrator | 2026-01-05 04:18:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:18:08.102369 | orchestrator | 2026-01-05 04:18:08 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:18:08.103130 | orchestrator | 2026-01-05 04:18:08 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:18:08.103751 | orchestrator | 2026-01-05 04:18:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:18:11.152444 | orchestrator | 2026-01-05 04:18:11 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:18:11.154395 | orchestrator | 2026-01-05 04:18:11 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:18:11.154680 | orchestrator | 2026-01-05 04:18:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:18:14.203906 | orchestrator | 2026-01-05 04:18:14 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:18:14.205157 | orchestrator | 2026-01-05 04:18:14 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:18:14.205310 | orchestrator | 2026-01-05 04:18:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:18:17.259258 | orchestrator | 2026-01-05 04:18:17 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:18:17.259767 | orchestrator | 2026-01-05 04:18:17 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:18:17.259856 | orchestrator | 2026-01-05 04:18:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:18:20.304625 | orchestrator | 2026-01-05 04:18:20 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:18:20.306010 | orchestrator | 2026-01-05 04:18:20 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:18:20.306107 | orchestrator | 2026-01-05 04:18:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:18:23.359324 | orchestrator | 2026-01-05 04:18:23 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:18:23.360695 | orchestrator | 2026-01-05 04:18:23 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:18:23.360751 | orchestrator | 2026-01-05 04:18:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:18:26.412105 | orchestrator | 2026-01-05 04:18:26 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:18:26.414974 | orchestrator | 2026-01-05 04:18:26 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:18:26.415021 | orchestrator | 2026-01-05 04:18:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:18:29.474983 | orchestrator | 2026-01-05 04:18:29 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:18:29.477730 | orchestrator | 2026-01-05 04:18:29 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:18:29.477820 | orchestrator | 2026-01-05 04:18:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:18:32.523857 | orchestrator | 2026-01-05 04:18:32 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:18:32.525644 | orchestrator | 2026-01-05 04:18:32 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
04:18:32.525717 | orchestrator | 2026-01-05 04:18:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:18:35.570289 | orchestrator | 2026-01-05 04:18:35 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:18:35.570553 | orchestrator | 2026-01-05 04:18:35 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:18:35.570575 | orchestrator | 2026-01-05 04:18:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:18:38.616687 | orchestrator | 2026-01-05 04:18:38 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:18:38.618924 | orchestrator | 2026-01-05 04:18:38 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:18:38.619010 | orchestrator | 2026-01-05 04:18:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:18:41.664001 | orchestrator | 2026-01-05 04:18:41 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:18:41.666320 | orchestrator | 2026-01-05 04:18:41 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:18:41.666374 | orchestrator | 2026-01-05 04:18:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:18:44.713354 | orchestrator | 2026-01-05 04:18:44 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:18:44.714563 | orchestrator | 2026-01-05 04:18:44 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:18:44.714607 | orchestrator | 2026-01-05 04:18:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:18:47.760259 | orchestrator | 2026-01-05 04:18:47 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:18:47.761035 | orchestrator | 2026-01-05 04:18:47 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:18:47.762380 | orchestrator | 2026-01-05 04:18:47 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 04:18:50.810480 | orchestrator | 2026-01-05 04:18:50 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:18:50.812089 | orchestrator | 2026-01-05 04:18:50 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:18:50.812151 | orchestrator | 2026-01-05 04:18:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:18:53.864765 | orchestrator | 2026-01-05 04:18:53 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:18:53.871311 | orchestrator | 2026-01-05 04:18:53 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:18:53.871548 | orchestrator | 2026-01-05 04:18:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:18:56.918197 | orchestrator | 2026-01-05 04:18:56 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:18:56.919621 | orchestrator | 2026-01-05 04:18:56 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:18:56.919981 | orchestrator | 2026-01-05 04:18:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:18:59.972609 | orchestrator | 2026-01-05 04:18:59 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:18:59.974105 | orchestrator | 2026-01-05 04:18:59 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:18:59.974143 | orchestrator | 2026-01-05 04:18:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:03.029694 | orchestrator | 2026-01-05 04:19:03 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:19:03.031186 | orchestrator | 2026-01-05 04:19:03 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:19:03.031230 | orchestrator | 2026-01-05 04:19:03 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:06.075255 | orchestrator | 2026-01-05 
04:19:06 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:19:06.079424 | orchestrator | 2026-01-05 04:19:06 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:19:06.079499 | orchestrator | 2026-01-05 04:19:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:09.121782 | orchestrator | 2026-01-05 04:19:09 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:19:09.122655 | orchestrator | 2026-01-05 04:19:09 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:19:09.123269 | orchestrator | 2026-01-05 04:19:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:12.173093 | orchestrator | 2026-01-05 04:19:12 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:19:12.174730 | orchestrator | 2026-01-05 04:19:12 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:19:12.174761 | orchestrator | 2026-01-05 04:19:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:15.226150 | orchestrator | 2026-01-05 04:19:15 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:19:15.228699 | orchestrator | 2026-01-05 04:19:15 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:19:15.228786 | orchestrator | 2026-01-05 04:19:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:18.276504 | orchestrator | 2026-01-05 04:19:18 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:19:18.278513 | orchestrator | 2026-01-05 04:19:18 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:19:18.278578 | orchestrator | 2026-01-05 04:19:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:21.324191 | orchestrator | 2026-01-05 04:19:21 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 04:19:21.326827 | orchestrator | 2026-01-05 04:19:21 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:19:21.326881 | orchestrator | 2026-01-05 04:19:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:24.368850 | orchestrator | 2026-01-05 04:19:24 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:19:24.371470 | orchestrator | 2026-01-05 04:19:24 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:19:24.371527 | orchestrator | 2026-01-05 04:19:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:27.423712 | orchestrator | 2026-01-05 04:19:27 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:19:27.425025 | orchestrator | 2026-01-05 04:19:27 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:19:27.425103 | orchestrator | 2026-01-05 04:19:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:30.471163 | orchestrator | 2026-01-05 04:19:30 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:19:30.473859 | orchestrator | 2026-01-05 04:19:30 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:19:30.473954 | orchestrator | 2026-01-05 04:19:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:33.525218 | orchestrator | 2026-01-05 04:19:33 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:19:33.526737 | orchestrator | 2026-01-05 04:19:33 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:19:33.526773 | orchestrator | 2026-01-05 04:19:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:36.568723 | orchestrator | 2026-01-05 04:19:36 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:19:36.570497 | orchestrator | 2026-01-05 04:19:36 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:19:36.570604 | orchestrator | 2026-01-05 04:19:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:39.620583 | orchestrator | 2026-01-05 04:19:39 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:19:39.622228 | orchestrator | 2026-01-05 04:19:39 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:19:39.622270 | orchestrator | 2026-01-05 04:19:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:42.667446 | orchestrator | 2026-01-05 04:19:42 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:19:42.668053 | orchestrator | 2026-01-05 04:19:42 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:19:42.668121 | orchestrator | 2026-01-05 04:19:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:45.716549 | orchestrator | 2026-01-05 04:19:45 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:19:45.717814 | orchestrator | 2026-01-05 04:19:45 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:19:45.717849 | orchestrator | 2026-01-05 04:19:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:48.766008 | orchestrator | 2026-01-05 04:19:48 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:19:48.767815 | orchestrator | 2026-01-05 04:19:48 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:19:48.767928 | orchestrator | 2026-01-05 04:19:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:51.820367 | orchestrator | 2026-01-05 04:19:51 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:19:51.822208 | orchestrator | 2026-01-05 04:19:51 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
04:19:51.822246 | orchestrator | 2026-01-05 04:19:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:54.869074 | orchestrator | 2026-01-05 04:19:54 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:19:54.869590 | orchestrator | 2026-01-05 04:19:54 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:19:54.869681 | orchestrator | 2026-01-05 04:19:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:19:57.913221 | orchestrator | 2026-01-05 04:19:57 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:19:57.913648 | orchestrator | 2026-01-05 04:19:57 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:19:57.913688 | orchestrator | 2026-01-05 04:19:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:00.963251 | orchestrator | 2026-01-05 04:20:00 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:20:00.964665 | orchestrator | 2026-01-05 04:20:00 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:20:00.964710 | orchestrator | 2026-01-05 04:20:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:04.019744 | orchestrator | 2026-01-05 04:20:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:20:04.020276 | orchestrator | 2026-01-05 04:20:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:20:04.020316 | orchestrator | 2026-01-05 04:20:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:07.065461 | orchestrator | 2026-01-05 04:20:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:20:07.067181 | orchestrator | 2026-01-05 04:20:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:20:07.067248 | orchestrator | 2026-01-05 04:20:07 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 04:20:10.117086 | orchestrator | 2026-01-05 04:20:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:20:10.118778 | orchestrator | 2026-01-05 04:20:10 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:20:10.119069 | orchestrator | 2026-01-05 04:20:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:13.162884 | orchestrator | 2026-01-05 04:20:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:20:13.164298 | orchestrator | 2026-01-05 04:20:13 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:20:13.164434 | orchestrator | 2026-01-05 04:20:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:16.216960 | orchestrator | 2026-01-05 04:20:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:20:16.219263 | orchestrator | 2026-01-05 04:20:16 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:20:16.219315 | orchestrator | 2026-01-05 04:20:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:19.271470 | orchestrator | 2026-01-05 04:20:19 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:20:19.272543 | orchestrator | 2026-01-05 04:20:19 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:20:19.272664 | orchestrator | 2026-01-05 04:20:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:22.326335 | orchestrator | 2026-01-05 04:20:22 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:20:22.328144 | orchestrator | 2026-01-05 04:20:22 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:20:22.328197 | orchestrator | 2026-01-05 04:20:22 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:25.373873 | orchestrator | 2026-01-05 
04:20:25 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:20:25.375986 | orchestrator | 2026-01-05 04:20:25 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:20:25.376052 | orchestrator | 2026-01-05 04:20:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:28.427112 | orchestrator | 2026-01-05 04:20:28 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:20:28.428233 | orchestrator | 2026-01-05 04:20:28 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:20:28.428275 | orchestrator | 2026-01-05 04:20:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:31.471037 | orchestrator | 2026-01-05 04:20:31 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:20:31.472739 | orchestrator | 2026-01-05 04:20:31 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:20:31.472837 | orchestrator | 2026-01-05 04:20:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:34.522842 | orchestrator | 2026-01-05 04:20:34 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:20:34.524122 | orchestrator | 2026-01-05 04:20:34 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:20:34.524194 | orchestrator | 2026-01-05 04:20:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:37.572867 | orchestrator | 2026-01-05 04:20:37 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:20:37.574069 | orchestrator | 2026-01-05 04:20:37 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:20:37.574135 | orchestrator | 2026-01-05 04:20:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:40.622638 | orchestrator | 2026-01-05 04:20:40 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 04:20:40.624202 | orchestrator | 2026-01-05 04:20:40 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:20:40.624239 | orchestrator | 2026-01-05 04:20:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:43.668694 | orchestrator | 2026-01-05 04:20:43 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:20:43.671228 | orchestrator | 2026-01-05 04:20:43 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:20:43.671279 | orchestrator | 2026-01-05 04:20:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:46.718250 | orchestrator | 2026-01-05 04:20:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:20:46.720199 | orchestrator | 2026-01-05 04:20:46 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:20:46.720270 | orchestrator | 2026-01-05 04:20:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:49.769914 | orchestrator | 2026-01-05 04:20:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:20:49.772857 | orchestrator | 2026-01-05 04:20:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:20:49.772983 | orchestrator | 2026-01-05 04:20:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:52.816739 | orchestrator | 2026-01-05 04:20:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:20:52.817928 | orchestrator | 2026-01-05 04:20:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:20:52.817964 | orchestrator | 2026-01-05 04:20:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:55.871150 | orchestrator | 2026-01-05 04:20:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:20:55.872736 | orchestrator | 2026-01-05 04:20:55 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:20:55.872966 | orchestrator | 2026-01-05 04:20:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:20:58.920849 | orchestrator | 2026-01-05 04:20:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:20:58.922511 | orchestrator | 2026-01-05 04:20:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:20:58.922553 | orchestrator | 2026-01-05 04:20:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:21:01.977633 | orchestrator | 2026-01-05 04:21:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:21:01.979510 | orchestrator | 2026-01-05 04:21:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:21:01.979905 | orchestrator | 2026-01-05 04:21:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:21:05.028533 | orchestrator | 2026-01-05 04:21:05 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:21:05.029587 | orchestrator | 2026-01-05 04:21:05 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:21:05.029637 | orchestrator | 2026-01-05 04:21:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:21:08.080982 | orchestrator | 2026-01-05 04:21:08 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:21:08.082191 | orchestrator | 2026-01-05 04:21:08 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:21:08.082285 | orchestrator | 2026-01-05 04:21:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:21:11.133701 | orchestrator | 2026-01-05 04:21:11 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:21:11.135817 | orchestrator | 2026-01-05 04:21:11 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
04:21:11.135924 | orchestrator | 2026-01-05 04:21:11 | INFO  | Wait 1 second(s) until the next check
2026-01-05 04:21:14.181839 | orchestrator | 2026-01-05 04:21:14 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 04:21:14.183584 | orchestrator | 2026-01-05 04:21:14 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 04:21:14.183660 | orchestrator | 2026-01-05 04:21:14 | INFO  | Wait 1 second(s) until the next check
[... identical status-check cycle repeated every ~3 seconds from 04:21:17 through 04:26:40; both tasks remained in state STARTED throughout ...]
2026-01-05 04:26:43.689023 | orchestrator | 2026-01-05 04:26:43 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED
2026-01-05 04:26:43.690409 | orchestrator | 2026-01-05 04:26:43 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED
2026-01-05 04:26:43.690461 | orchestrator | 2026-01-05 04:26:43 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 04:26:46.738799 | orchestrator | 2026-01-05 04:26:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:26:46.741060 | orchestrator | 2026-01-05 04:26:46 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:26:46.741222 | orchestrator | 2026-01-05 04:26:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:26:49.785788 | orchestrator | 2026-01-05 04:26:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:26:49.787398 | orchestrator | 2026-01-05 04:26:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:26:49.787467 | orchestrator | 2026-01-05 04:26:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:26:52.844008 | orchestrator | 2026-01-05 04:26:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:26:52.845245 | orchestrator | 2026-01-05 04:26:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:26:52.845262 | orchestrator | 2026-01-05 04:26:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:26:55.904423 | orchestrator | 2026-01-05 04:26:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:26:55.906116 | orchestrator | 2026-01-05 04:26:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:26:55.906225 | orchestrator | 2026-01-05 04:26:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:26:58.957656 | orchestrator | 2026-01-05 04:26:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:26:58.961416 | orchestrator | 2026-01-05 04:26:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:26:58.961484 | orchestrator | 2026-01-05 04:26:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:27:02.008820 | orchestrator | 2026-01-05 
04:27:02 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:27:02.010968 | orchestrator | 2026-01-05 04:27:02 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:27:02.011025 | orchestrator | 2026-01-05 04:27:02 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:27:05.061246 | orchestrator | 2026-01-05 04:27:05 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:27:05.063294 | orchestrator | 2026-01-05 04:27:05 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:27:05.063397 | orchestrator | 2026-01-05 04:27:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:27:08.113692 | orchestrator | 2026-01-05 04:27:08 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:27:08.113943 | orchestrator | 2026-01-05 04:27:08 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:27:08.113967 | orchestrator | 2026-01-05 04:27:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:27:11.169959 | orchestrator | 2026-01-05 04:27:11 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:27:11.170954 | orchestrator | 2026-01-05 04:27:11 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:27:11.171003 | orchestrator | 2026-01-05 04:27:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:27:14.220992 | orchestrator | 2026-01-05 04:27:14 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:27:14.222779 | orchestrator | 2026-01-05 04:27:14 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:27:14.223618 | orchestrator | 2026-01-05 04:27:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:27:17.274396 | orchestrator | 2026-01-05 04:27:17 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 04:27:17.276732 | orchestrator | 2026-01-05 04:27:17 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:27:17.276782 | orchestrator | 2026-01-05 04:27:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:27:20.323849 | orchestrator | 2026-01-05 04:27:20 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:27:20.325320 | orchestrator | 2026-01-05 04:27:20 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:27:20.325434 | orchestrator | 2026-01-05 04:27:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:27:23.378510 | orchestrator | 2026-01-05 04:27:23 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:27:23.381337 | orchestrator | 2026-01-05 04:27:23 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:27:23.381429 | orchestrator | 2026-01-05 04:27:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:27:26.437685 | orchestrator | 2026-01-05 04:27:26 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:27:26.439118 | orchestrator | 2026-01-05 04:27:26 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:27:26.439176 | orchestrator | 2026-01-05 04:27:26 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:27:29.487729 | orchestrator | 2026-01-05 04:27:29 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:27:29.488203 | orchestrator | 2026-01-05 04:27:29 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:27:29.488248 | orchestrator | 2026-01-05 04:27:29 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:27:32.531474 | orchestrator | 2026-01-05 04:27:32 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:27:32.533205 | orchestrator | 2026-01-05 04:27:32 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:27:32.533396 | orchestrator | 2026-01-05 04:27:32 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:27:35.579395 | orchestrator | 2026-01-05 04:27:35 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:27:35.579805 | orchestrator | 2026-01-05 04:27:35 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:27:35.579834 | orchestrator | 2026-01-05 04:27:35 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:27:38.622312 | orchestrator | 2026-01-05 04:27:38 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:27:38.624414 | orchestrator | 2026-01-05 04:27:38 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:27:38.624499 | orchestrator | 2026-01-05 04:27:38 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:27:41.672621 | orchestrator | 2026-01-05 04:27:41 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:27:41.673859 | orchestrator | 2026-01-05 04:27:41 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:27:41.674087 | orchestrator | 2026-01-05 04:27:41 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:27:44.721026 | orchestrator | 2026-01-05 04:27:44 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:27:44.723939 | orchestrator | 2026-01-05 04:27:44 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:27:44.724160 | orchestrator | 2026-01-05 04:27:44 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:27:47.767732 | orchestrator | 2026-01-05 04:27:47 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:27:47.768040 | orchestrator | 2026-01-05 04:27:47 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
04:27:47.768080 | orchestrator | 2026-01-05 04:27:47 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:27:50.815888 | orchestrator | 2026-01-05 04:27:50 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:27:50.817697 | orchestrator | 2026-01-05 04:27:50 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:27:50.817750 | orchestrator | 2026-01-05 04:27:50 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:27:53.863923 | orchestrator | 2026-01-05 04:27:53 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:27:53.866241 | orchestrator | 2026-01-05 04:27:53 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:27:53.866315 | orchestrator | 2026-01-05 04:27:53 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:27:56.914815 | orchestrator | 2026-01-05 04:27:56 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:27:56.917411 | orchestrator | 2026-01-05 04:27:56 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:27:56.917824 | orchestrator | 2026-01-05 04:27:56 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:27:59.967385 | orchestrator | 2026-01-05 04:27:59 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:27:59.968564 | orchestrator | 2026-01-05 04:27:59 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:27:59.969449 | orchestrator | 2026-01-05 04:27:59 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:03.014448 | orchestrator | 2026-01-05 04:28:03 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:28:03.015819 | orchestrator | 2026-01-05 04:28:03 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:28:03.015957 | orchestrator | 2026-01-05 04:28:03 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 04:28:06.063279 | orchestrator | 2026-01-05 04:28:06 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:28:06.064868 | orchestrator | 2026-01-05 04:28:06 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:28:06.064964 | orchestrator | 2026-01-05 04:28:06 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:09.109505 | orchestrator | 2026-01-05 04:28:09 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:28:09.112329 | orchestrator | 2026-01-05 04:28:09 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:28:09.112392 | orchestrator | 2026-01-05 04:28:09 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:12.158511 | orchestrator | 2026-01-05 04:28:12 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:28:12.161197 | orchestrator | 2026-01-05 04:28:12 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:28:12.161253 | orchestrator | 2026-01-05 04:28:12 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:15.212330 | orchestrator | 2026-01-05 04:28:15 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:28:15.215443 | orchestrator | 2026-01-05 04:28:15 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:28:15.215500 | orchestrator | 2026-01-05 04:28:15 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:18.260695 | orchestrator | 2026-01-05 04:28:18 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:28:18.262378 | orchestrator | 2026-01-05 04:28:18 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:28:18.262435 | orchestrator | 2026-01-05 04:28:18 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:21.305007 | orchestrator | 2026-01-05 
04:28:21 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:28:21.305629 | orchestrator | 2026-01-05 04:28:21 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:28:21.305676 | orchestrator | 2026-01-05 04:28:21 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:24.359216 | orchestrator | 2026-01-05 04:28:24 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:28:24.361579 | orchestrator | 2026-01-05 04:28:24 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:28:24.361641 | orchestrator | 2026-01-05 04:28:24 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:27.406241 | orchestrator | 2026-01-05 04:28:27 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:28:27.407874 | orchestrator | 2026-01-05 04:28:27 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:28:27.407953 | orchestrator | 2026-01-05 04:28:27 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:30.451617 | orchestrator | 2026-01-05 04:28:30 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:28:30.453374 | orchestrator | 2026-01-05 04:28:30 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:28:30.453426 | orchestrator | 2026-01-05 04:28:30 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:33.504497 | orchestrator | 2026-01-05 04:28:33 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:28:33.506258 | orchestrator | 2026-01-05 04:28:33 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:28:33.506347 | orchestrator | 2026-01-05 04:28:33 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:36.551752 | orchestrator | 2026-01-05 04:28:36 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 04:28:36.553713 | orchestrator | 2026-01-05 04:28:36 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:28:36.553747 | orchestrator | 2026-01-05 04:28:36 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:39.603691 | orchestrator | 2026-01-05 04:28:39 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:28:39.604789 | orchestrator | 2026-01-05 04:28:39 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:28:39.604972 | orchestrator | 2026-01-05 04:28:39 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:42.649364 | orchestrator | 2026-01-05 04:28:42 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:28:42.650705 | orchestrator | 2026-01-05 04:28:42 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:28:42.650800 | orchestrator | 2026-01-05 04:28:42 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:45.689862 | orchestrator | 2026-01-05 04:28:45 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:28:45.691846 | orchestrator | 2026-01-05 04:28:45 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:28:45.691872 | orchestrator | 2026-01-05 04:28:45 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:48.741856 | orchestrator | 2026-01-05 04:28:48 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:28:48.743955 | orchestrator | 2026-01-05 04:28:48 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:28:48.744023 | orchestrator | 2026-01-05 04:28:48 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:51.788466 | orchestrator | 2026-01-05 04:28:51 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:28:51.789374 | orchestrator | 2026-01-05 04:28:51 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:28:51.789417 | orchestrator | 2026-01-05 04:28:51 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:54.844098 | orchestrator | 2026-01-05 04:28:54 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:28:54.846605 | orchestrator | 2026-01-05 04:28:54 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:28:54.846681 | orchestrator | 2026-01-05 04:28:54 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:28:57.904028 | orchestrator | 2026-01-05 04:28:57 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:28:57.905188 | orchestrator | 2026-01-05 04:28:57 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:28:57.905281 | orchestrator | 2026-01-05 04:28:57 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:00.956427 | orchestrator | 2026-01-05 04:29:00 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:29:00.958552 | orchestrator | 2026-01-05 04:29:00 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:29:00.958626 | orchestrator | 2026-01-05 04:29:00 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:04.015762 | orchestrator | 2026-01-05 04:29:04 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:29:04.017160 | orchestrator | 2026-01-05 04:29:04 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:29:04.017225 | orchestrator | 2026-01-05 04:29:04 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:07.071173 | orchestrator | 2026-01-05 04:29:07 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:29:07.073113 | orchestrator | 2026-01-05 04:29:07 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 
04:29:07.073151 | orchestrator | 2026-01-05 04:29:07 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:10.109259 | orchestrator | 2026-01-05 04:29:10 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:29:10.110702 | orchestrator | 2026-01-05 04:29:10 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:29:10.111057 | orchestrator | 2026-01-05 04:29:10 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:13.155847 | orchestrator | 2026-01-05 04:29:13 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:29:13.156621 | orchestrator | 2026-01-05 04:29:13 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:29:13.156660 | orchestrator | 2026-01-05 04:29:13 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:16.202269 | orchestrator | 2026-01-05 04:29:16 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:29:16.204143 | orchestrator | 2026-01-05 04:29:16 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:29:16.204280 | orchestrator | 2026-01-05 04:29:16 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:19.255541 | orchestrator | 2026-01-05 04:29:19 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:29:19.257202 | orchestrator | 2026-01-05 04:29:19 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:29:19.257250 | orchestrator | 2026-01-05 04:29:19 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:22.307382 | orchestrator | 2026-01-05 04:29:22 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:29:22.309397 | orchestrator | 2026-01-05 04:29:22 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:29:22.309572 | orchestrator | 2026-01-05 04:29:22 | INFO  | Wait 1 second(s) 
until the next check 2026-01-05 04:29:25.350532 | orchestrator | 2026-01-05 04:29:25 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:29:25.351990 | orchestrator | 2026-01-05 04:29:25 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:29:25.352028 | orchestrator | 2026-01-05 04:29:25 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:28.398416 | orchestrator | 2026-01-05 04:29:28 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:29:28.400488 | orchestrator | 2026-01-05 04:29:28 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:29:28.400528 | orchestrator | 2026-01-05 04:29:28 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:31.458559 | orchestrator | 2026-01-05 04:29:31 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:29:31.460200 | orchestrator | 2026-01-05 04:29:31 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:29:31.460237 | orchestrator | 2026-01-05 04:29:31 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:34.509581 | orchestrator | 2026-01-05 04:29:34 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:29:34.511416 | orchestrator | 2026-01-05 04:29:34 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:29:34.511534 | orchestrator | 2026-01-05 04:29:34 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:37.568868 | orchestrator | 2026-01-05 04:29:37 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:29:37.569213 | orchestrator | 2026-01-05 04:29:37 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:29:37.569311 | orchestrator | 2026-01-05 04:29:37 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:40.622321 | orchestrator | 2026-01-05 
04:29:40 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:29:40.622480 | orchestrator | 2026-01-05 04:29:40 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:29:40.622496 | orchestrator | 2026-01-05 04:29:40 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:43.666566 | orchestrator | 2026-01-05 04:29:43 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:29:43.668450 | orchestrator | 2026-01-05 04:29:43 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:29:43.668576 | orchestrator | 2026-01-05 04:29:43 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:46.710870 | orchestrator | 2026-01-05 04:29:46 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:29:46.712360 | orchestrator | 2026-01-05 04:29:46 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:29:46.712455 | orchestrator | 2026-01-05 04:29:46 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:49.762421 | orchestrator | 2026-01-05 04:29:49 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:29:49.764318 | orchestrator | 2026-01-05 04:29:49 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:29:49.764375 | orchestrator | 2026-01-05 04:29:49 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:52.814124 | orchestrator | 2026-01-05 04:29:52 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:29:52.815347 | orchestrator | 2026-01-05 04:29:52 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:29:52.815381 | orchestrator | 2026-01-05 04:29:52 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:55.864093 | orchestrator | 2026-01-05 04:29:55 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state 
STARTED 2026-01-05 04:29:55.865892 | orchestrator | 2026-01-05 04:29:55 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:29:55.865945 | orchestrator | 2026-01-05 04:29:55 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:29:58.913747 | orchestrator | 2026-01-05 04:29:58 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:29:58.915883 | orchestrator | 2026-01-05 04:29:58 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:29:58.916017 | orchestrator | 2026-01-05 04:29:58 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:30:01.959170 | orchestrator | 2026-01-05 04:30:01 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:30:01.961732 | orchestrator | 2026-01-05 04:30:01 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:30:01.961784 | orchestrator | 2026-01-05 04:30:01 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:30:05.008531 | orchestrator | 2026-01-05 04:30:05 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:30:05.010899 | orchestrator | 2026-01-05 04:30:05 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:30:05.011021 | orchestrator | 2026-01-05 04:30:05 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:30:08.062189 | orchestrator | 2026-01-05 04:30:08 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:30:08.063173 | orchestrator | 2026-01-05 04:30:08 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:30:08.063715 | orchestrator | 2026-01-05 04:30:08 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:30:11.114524 | orchestrator | 2026-01-05 04:30:11 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:30:11.116842 | orchestrator | 2026-01-05 04:30:11 | INFO  
| Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:30:11.117036 | orchestrator | 2026-01-05 04:30:11 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:30:14.166619 | orchestrator | 2026-01-05 04:30:14 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:30:14.167680 | orchestrator | 2026-01-05 04:30:14 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:30:14.167720 | orchestrator | 2026-01-05 04:30:14 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:30:17.218914 | orchestrator | 2026-01-05 04:30:17 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:30:17.219873 | orchestrator | 2026-01-05 04:30:17 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:30:17.220561 | orchestrator | 2026-01-05 04:30:17 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:30:20.263572 | orchestrator | 2026-01-05 04:30:20 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:30:20.263976 | orchestrator | 2026-01-05 04:30:20 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:30:20.264329 | orchestrator | 2026-01-05 04:30:20 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:30:23.314800 | orchestrator | 2026-01-05 04:30:23 | INFO  | Task e3a9f185-bcb6-4913-bb1a-d444ee1687d0 is in state STARTED 2026-01-05 04:30:23.317068 | orchestrator | 2026-01-05 04:30:23 | INFO  | Task 00e2a2c6-6b94-416a-ac35-b73676807745 is in state STARTED 2026-01-05 04:30:23.317278 | orchestrator | 2026-01-05 04:30:23 | INFO  | Wait 1 second(s) until the next check 2026-01-05 04:30:25.257019 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2026-01-05 04:30:25.259165 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-01-05 04:30:26.052937 | 2026-01-05 04:30:26.053263 | 
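The run above is a fixed-interval status poll: each cycle queries the state of both tasks, logs it, waits, and repeats until the tasks leave STARTED or the job-level timeout fires (here Zuul cut it off with RESULT_TIMED_OUT). A minimal sketch of that loop, assuming a hypothetical `get_state` callback in place of however the osism client actually queries Celery task state:

```python
import time


def wait_for_tasks(task_ids, get_state, timeout=300.0, interval=1.0,
                   clock=time.monotonic, sleep=time.sleep):
    """Poll each task until none is still running, or raise TimeoutError.

    Mirrors the log above: every cycle reports the state of each task,
    then waits `interval` seconds before the next check. `get_state` is
    a hypothetical stand-in returning a Celery-style state string.
    """
    deadline = clock() + timeout
    while True:
        states = {tid: get_state(tid) for tid in task_ids}
        for tid, state in states.items():
            print(f"Task {tid} is in state {state}")
        # Done once no task is pending or started any more.
        if all(s not in ("PENDING", "STARTED") for s in states.values()):
            return states
        if clock() >= deadline:
            raise TimeoutError(f"tasks still running after {timeout}s: {states}")
        print(f"Wait {interval:.0f} second(s) until the next check")
        sleep(interval)
```

With a real backend the `clock`/`sleep` defaults apply; injecting fakes makes the timeout path easy to exercise without waiting in real time.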
PLAY [Post output play]
2026-01-05 04:30:26.076418 |
2026-01-05 04:30:26.076675 | LOOP [stage-output : Register sources]
2026-01-05 04:30:26.131649 |
2026-01-05 04:30:26.132003 | TASK [stage-output : Check sudo]
2026-01-05 04:30:27.075185 | orchestrator | sudo: a password is required
2026-01-05 04:30:27.171096 | orchestrator | ok: Runtime: 0:00:00.012166
2026-01-05 04:30:27.178725 |
2026-01-05 04:30:27.178909 | LOOP [stage-output : Set source and destination for files and folders]
2026-01-05 04:30:27.211539 |
2026-01-05 04:30:27.211908 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-01-05 04:30:27.274762 | orchestrator | ok
2026-01-05 04:30:27.283567 |
2026-01-05 04:30:27.283706 | LOOP [stage-output : Ensure target folders exist]
2026-01-05 04:30:27.809221 | orchestrator | ok: "docs"
2026-01-05 04:30:27.809515 |
2026-01-05 04:30:28.067361 | orchestrator | ok: "artifacts"
2026-01-05 04:30:28.336659 | orchestrator | ok: "logs"
2026-01-05 04:30:28.357324 |
2026-01-05 04:30:28.357517 | LOOP [stage-output : Copy files and folders to staging folder]
2026-01-05 04:30:28.396307 |
2026-01-05 04:30:28.396536 | TASK [stage-output : Make all log files readable]
2026-01-05 04:30:28.696943 | orchestrator | ok
2026-01-05 04:30:28.709552 |
2026-01-05 04:30:28.709759 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-01-05 04:30:28.746933 | orchestrator | skipping: Conditional result was False
2026-01-05 04:30:28.764335 |
2026-01-05 04:30:28.764553 | TASK [stage-output : Discover log files for compression]
2026-01-05 04:30:28.791909 | orchestrator | skipping: Conditional result was False
2026-01-05 04:30:28.809343 |
2026-01-05 04:30:28.809535 | LOOP [stage-output : Archive everything from logs]
2026-01-05 04:30:28.848998 |
2026-01-05 04:30:28.849182 | PLAY [Post cleanup play]
2026-01-05 04:30:28.858205 |
2026-01-05 04:30:28.858340 | TASK [Set cloud fact (Zuul deployment)]
2026-01-05 04:30:28.926143 | orchestrator | ok
2026-01-05 04:30:28.939414 |
2026-01-05 04:30:28.939578 | TASK [Set cloud fact (local deployment)]
2026-01-05 04:30:28.974735 | orchestrator | skipping: Conditional result was False
2026-01-05 04:30:28.993528 |
2026-01-05 04:30:28.993742 | TASK [Clean the cloud environment]
2026-01-05 04:30:29.669520 | orchestrator | 2026-01-05 04:30:29 - clean up servers
2026-01-05 04:30:30.541025 | orchestrator | 2026-01-05 04:30:30 - testbed-manager
2026-01-05 04:30:30.638708 | orchestrator | 2026-01-05 04:30:30 - testbed-node-5
2026-01-05 04:30:30.760049 | orchestrator | 2026-01-05 04:30:30 - testbed-node-3
2026-01-05 04:30:30.872070 | orchestrator | 2026-01-05 04:30:30 - testbed-node-2
2026-01-05 04:30:30.975257 | orchestrator | 2026-01-05 04:30:30 - testbed-node-1
2026-01-05 04:30:31.083349 | orchestrator | 2026-01-05 04:30:31 - testbed-node-4
2026-01-05 04:30:31.191099 | orchestrator | 2026-01-05 04:30:31 - testbed-node-0
2026-01-05 04:30:31.283678 | orchestrator | 2026-01-05 04:30:31 - clean up keypairs
2026-01-05 04:30:31.303025 | orchestrator | 2026-01-05 04:30:31 - testbed
2026-01-05 04:30:31.332258 | orchestrator | 2026-01-05 04:30:31 - wait for servers to be gone
2026-01-05 04:30:46.616219 | orchestrator | 2026-01-05 04:30:46 - clean up ports
2026-01-05 04:30:46.802631 | orchestrator | 2026-01-05 04:30:46 - 382bff08-51c5-4e33-85de-61f10bf90b4f
2026-01-05 04:30:47.030737 | orchestrator | 2026-01-05 04:30:47 - 569af681-f34d-43ea-8e76-83469f34ad02
2026-01-05 04:30:47.314221 | orchestrator | 2026-01-05 04:30:47 - 65f2cb5b-55a7-41c7-bb73-3cd28cae7c1a
2026-01-05 04:30:47.582180 | orchestrator | 2026-01-05 04:30:47 - 7eff5e2e-cd88-4040-96c5-b70190bbb006
2026-01-05 04:30:47.812125 | orchestrator | 2026-01-05 04:30:47 - 9faaf55f-ec2e-4af1-8545-13e57edf514a
2026-01-05 04:30:48.077184 | orchestrator | 2026-01-05 04:30:48 - a4085940-ce44-4b6e-93ce-52b2caa58ff1
2026-01-05 04:30:48.490782 | orchestrator | 2026-01-05 04:30:48 - d364b1e1-e906-4dd8-83b9-7f9597dbcefc
2026-01-05 04:30:48.712164 | orchestrator | 2026-01-05 04:30:48 - clean up volumes
2026-01-05 04:30:48.833905 | orchestrator | 2026-01-05 04:30:48 - testbed-volume-4-node-base
2026-01-05 04:30:48.871189 | orchestrator | 2026-01-05 04:30:48 - testbed-volume-3-node-base
2026-01-05 04:30:48.915769 | orchestrator | 2026-01-05 04:30:48 - testbed-volume-2-node-base
2026-01-05 04:30:48.958382 | orchestrator | 2026-01-05 04:30:48 - testbed-volume-5-node-base
2026-01-05 04:30:48.995977 | orchestrator | 2026-01-05 04:30:48 - testbed-volume-1-node-base
2026-01-05 04:30:49.043924 | orchestrator | 2026-01-05 04:30:49 - testbed-volume-0-node-base
2026-01-05 04:30:49.088888 | orchestrator | 2026-01-05 04:30:49 - testbed-volume-manager-base
2026-01-05 04:30:49.137749 | orchestrator | 2026-01-05 04:30:49 - testbed-volume-4-node-4
2026-01-05 04:30:49.189197 | orchestrator | 2026-01-05 04:30:49 - testbed-volume-2-node-5
2026-01-05 04:30:49.232866 | orchestrator | 2026-01-05 04:30:49 - testbed-volume-5-node-5
2026-01-05 04:30:49.273708 | orchestrator | 2026-01-05 04:30:49 - testbed-volume-1-node-4
2026-01-05 04:30:49.315981 | orchestrator | 2026-01-05 04:30:49 - testbed-volume-6-node-3
2026-01-05 04:30:49.361874 | orchestrator | 2026-01-05 04:30:49 - testbed-volume-3-node-3
2026-01-05 04:30:49.409920 | orchestrator | 2026-01-05 04:30:49 - testbed-volume-8-node-5
2026-01-05 04:30:49.456603 | orchestrator | 2026-01-05 04:30:49 - testbed-volume-0-node-3
2026-01-05 04:30:49.510884 | orchestrator | 2026-01-05 04:30:49 - testbed-volume-7-node-4
2026-01-05 04:30:49.562601 | orchestrator | 2026-01-05 04:30:49 - disconnect routers
2026-01-05 04:30:49.714321 | orchestrator | 2026-01-05 04:30:49 - testbed
2026-01-05 04:30:50.788120 | orchestrator | 2026-01-05 04:30:50 - clean up subnets
2026-01-05 04:30:50.862839 | orchestrator | 2026-01-05 04:30:50 - subnet-testbed-management
2026-01-05 04:30:51.036558 | orchestrator | 2026-01-05 04:30:51 - clean up networks
2026-01-05 04:30:51.172925 | orchestrator | 2026-01-05 04:30:51 -
net-testbed-management 2026-01-05 04:30:51.535787 | orchestrator | 2026-01-05 04:30:51 - clean up security groups 2026-01-05 04:30:51.581023 | orchestrator | 2026-01-05 04:30:51 - testbed-management 2026-01-05 04:30:51.728538 | orchestrator | 2026-01-05 04:30:51 - testbed-node 2026-01-05 04:30:51.875459 | orchestrator | 2026-01-05 04:30:51 - clean up floating ips 2026-01-05 04:30:51.915259 | orchestrator | 2026-01-05 04:30:51 - 81.163.193.14 2026-01-05 04:30:52.380536 | orchestrator | 2026-01-05 04:30:52 - clean up routers 2026-01-05 04:30:52.452484 | orchestrator | 2026-01-05 04:30:52 - testbed 2026-01-05 04:30:53.560860 | orchestrator | ok: Runtime: 0:00:24.027948 2026-01-05 04:30:53.563091 | 2026-01-05 04:30:53.563197 | PLAY RECAP 2026-01-05 04:30:53.563256 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2026-01-05 04:30:53.563282 | 2026-01-05 04:30:53.737915 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-01-05 04:30:53.740496 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-01-05 04:30:54.596705 | 2026-01-05 04:30:54.596900 | PLAY [Cleanup play] 2026-01-05 04:30:54.617291 | 2026-01-05 04:30:54.617609 | TASK [Set cloud fact (Zuul deployment)] 2026-01-05 04:30:54.660364 | orchestrator | ok 2026-01-05 04:30:54.669212 | 2026-01-05 04:30:54.669384 | TASK [Set cloud fact (local deployment)] 2026-01-05 04:30:54.703729 | orchestrator | skipping: Conditional result was False 2026-01-05 04:30:54.712867 | 2026-01-05 04:30:54.713027 | TASK [Clean the cloud environment] 2026-01-05 04:30:55.941343 | orchestrator | 2026-01-05 04:30:55 - clean up servers 2026-01-05 04:30:56.539404 | orchestrator | 2026-01-05 04:30:56 - clean up keypairs 2026-01-05 04:30:56.556726 | orchestrator | 2026-01-05 04:30:56 - wait for servers to be gone 2026-01-05 04:30:56.602095 | orchestrator | 2026-01-05 04:30:56 - clean up ports 2026-01-05 04:30:56.676380 | 
orchestrator | 2026-01-05 04:30:56 - clean up volumes 2026-01-05 04:30:56.753925 | orchestrator | 2026-01-05 04:30:56 - disconnect routers 2026-01-05 04:30:56.778841 | orchestrator | 2026-01-05 04:30:56 - clean up subnets 2026-01-05 04:30:56.803090 | orchestrator | 2026-01-05 04:30:56 - clean up networks 2026-01-05 04:30:56.936370 | orchestrator | 2026-01-05 04:30:56 - clean up security groups 2026-01-05 04:30:56.971611 | orchestrator | 2026-01-05 04:30:56 - clean up floating ips 2026-01-05 04:30:56.997609 | orchestrator | 2026-01-05 04:30:56 - clean up routers 2026-01-05 04:30:57.255653 | orchestrator | ok: Runtime: 0:00:01.501213 2026-01-05 04:30:57.259649 | 2026-01-05 04:30:57.259864 | PLAY RECAP 2026-01-05 04:30:57.259998 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2026-01-05 04:30:57.260065 | 2026-01-05 04:30:57.411884 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-01-05 04:30:57.413019 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-01-05 04:30:58.205633 | 2026-01-05 04:30:58.205848 | PLAY [Base post-fetch] 2026-01-05 04:30:58.224583 | 2026-01-05 04:30:58.224744 | TASK [fetch-output : Set log path for multiple nodes] 2026-01-05 04:30:58.282033 | orchestrator | skipping: Conditional result was False 2026-01-05 04:30:58.297683 | 2026-01-05 04:30:58.297981 | TASK [fetch-output : Set log path for single node] 2026-01-05 04:30:58.356239 | orchestrator | ok 2026-01-05 04:30:58.365653 | 2026-01-05 04:30:58.365821 | LOOP [fetch-output : Ensure local output dirs] 2026-01-05 04:30:58.875504 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/5fd7b39a1a694aa3b9baae85283a997d/work/logs" 2026-01-05 04:30:59.189009 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/5fd7b39a1a694aa3b9baae85283a997d/work/artifacts" 2026-01-05 04:30:59.468679 | orchestrator -> localhost | changed: 
"/var/lib/zuul/builds/5fd7b39a1a694aa3b9baae85283a997d/work/docs" 2026-01-05 04:30:59.498665 | 2026-01-05 04:30:59.498824 | LOOP [fetch-output : Collect logs, artifacts and docs] 2026-01-05 04:31:00.474281 | orchestrator | changed: .d..t...... ./ 2026-01-05 04:31:00.474555 | orchestrator | changed: All items complete 2026-01-05 04:31:00.474602 | 2026-01-05 04:31:01.169694 | orchestrator | changed: .d..t...... ./ 2026-01-05 04:31:01.918505 | orchestrator | changed: .d..t...... ./ 2026-01-05 04:31:01.938220 | 2026-01-05 04:31:01.938369 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-01-05 04:31:01.976228 | orchestrator | skipping: Conditional result was False 2026-01-05 04:31:01.982457 | orchestrator | skipping: Conditional result was False 2026-01-05 04:31:02.003809 | 2026-01-05 04:31:02.003979 | PLAY RECAP 2026-01-05 04:31:02.004060 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-01-05 04:31:02.004099 | 2026-01-05 04:31:02.158039 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-01-05 04:31:02.159183 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-01-05 04:31:02.971402 | 2026-01-05 04:31:02.971589 | PLAY [Base post] 2026-01-05 04:31:02.987061 | 2026-01-05 04:31:02.987229 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-01-05 04:31:04.046605 | orchestrator | changed 2026-01-05 04:31:04.056259 | 2026-01-05 04:31:04.056403 | PLAY RECAP 2026-01-05 04:31:04.056466 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-01-05 04:31:04.056528 | 2026-01-05 04:31:04.246374 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-01-05 04:31:04.249241 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-01-05 04:31:05.063094 | 
2026-01-05 04:31:05.063272 | PLAY [Base post-logs] 2026-01-05 04:31:05.074489 | 2026-01-05 04:31:05.074655 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-01-05 04:31:05.567605 | localhost | changed 2026-01-05 04:31:05.578303 | 2026-01-05 04:31:05.578478 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-01-05 04:31:05.617103 | localhost | ok 2026-01-05 04:31:05.624176 | 2026-01-05 04:31:05.624357 | TASK [Set zuul-log-path fact] 2026-01-05 04:31:05.653620 | localhost | ok 2026-01-05 04:31:05.670803 | 2026-01-05 04:31:05.671080 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-01-05 04:31:05.709335 | localhost | ok 2026-01-05 04:31:05.716219 | 2026-01-05 04:31:05.716382 | TASK [upload-logs : Create log directories] 2026-01-05 04:31:06.258286 | localhost | changed 2026-01-05 04:31:06.264214 | 2026-01-05 04:31:06.264493 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-01-05 04:31:06.880965 | localhost -> localhost | ok: Runtime: 0:00:00.011755 2026-01-05 04:31:06.890341 | 2026-01-05 04:31:06.890558 | TASK [upload-logs : Upload logs to log server] 2026-01-05 04:31:07.510070 | localhost | Output suppressed because no_log was given 2026-01-05 04:31:07.512991 | 2026-01-05 04:31:07.513256 | LOOP [upload-logs : Compress console log and json output] 2026-01-05 04:31:07.578203 | localhost | skipping: Conditional result was False 2026-01-05 04:31:07.582987 | localhost | skipping: Conditional result was False 2026-01-05 04:31:07.596221 | 2026-01-05 04:31:07.596429 | LOOP [upload-logs : Upload compressed console log and json output] 2026-01-05 04:31:07.648253 | localhost | skipping: Conditional result was False 2026-01-05 04:31:07.648893 | 2026-01-05 04:31:07.652513 | localhost | skipping: Conditional result was False 2026-01-05 04:31:07.659830 | 2026-01-05 04:31:07.660068 | LOOP [upload-logs : Upload console log and json output]