2025-03-22 22:09:59.463261 | Job console starting...
2025-03-22 22:09:59.472863 | Updating repositories
2025-03-22 22:09:59.537463 | Preparing job workspace
2025-03-22 22:10:00.956017 | Running Ansible setup...
2025-03-22 22:10:05.670082 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-03-22 22:10:06.363306 |
2025-03-22 22:10:06.363457 | PLAY [Base pre]
2025-03-22 22:10:06.394686 |
2025-03-22 22:10:06.394815 | TASK [Setup log path fact]
2025-03-22 22:10:06.428466 | orchestrator | ok
2025-03-22 22:10:06.448140 |
2025-03-22 22:10:06.448267 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-03-22 22:10:06.491510 | orchestrator | ok
2025-03-22 22:10:06.507282 |
2025-03-22 22:10:06.507387 | TASK [emit-job-header : Print job information]
2025-03-22 22:10:06.568541 | # Job Information
2025-03-22 22:10:06.568776 | Ansible Version: 2.15.3
2025-03-22 22:10:06.568829 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-03-22 22:10:06.568876 | Pipeline: post
2025-03-22 22:10:06.568911 | Executor: 7d211f194f6a
2025-03-22 22:10:06.568942 | Triggered by: https://github.com/osism/testbed/commit/d89d88ac14b9b2bc3a6306d70c2b934eb232a2de
2025-03-22 22:10:06.568973 | Event ID: fd4d9c5e-0766-11f0-88f3-69602e84d0e4
2025-03-22 22:10:06.578236 |
2025-03-22 22:10:06.578354 | LOOP [emit-job-header : Print node information]
2025-03-22 22:10:06.731929 | orchestrator | ok:
2025-03-22 22:10:06.732162 | orchestrator | # Node Information
2025-03-22 22:10:06.732221 | orchestrator | Inventory Hostname: orchestrator
2025-03-22 22:10:06.732267 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-03-22 22:10:06.732307 | orchestrator | Username: zuul-testbed03
2025-03-22 22:10:06.732344 | orchestrator | Distro: Debian 12.10
2025-03-22 22:10:06.732431 | orchestrator | Provider: static-testbed
2025-03-22 22:10:06.732477 | orchestrator | Label: testbed-orchestrator
2025-03-22 22:10:06.732517 | orchestrator | Product Name: OpenStack Nova
2025-03-22 22:10:06.732554 | orchestrator | Interface IP: 81.163.193.140
2025-03-22 22:10:06.760618 |
2025-03-22 22:10:06.760859 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-03-22 22:10:07.218890 | orchestrator -> localhost | changed
2025-03-22 22:10:07.237566 |
2025-03-22 22:10:07.237779 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-03-22 22:10:08.252837 | orchestrator -> localhost | changed
2025-03-22 22:10:08.282760 |
2025-03-22 22:10:08.282890 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-03-22 22:10:08.565222 | orchestrator -> localhost | ok
2025-03-22 22:10:08.573916 |
2025-03-22 22:10:08.574043 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-03-22 22:10:08.605656 | orchestrator | ok
2025-03-22 22:10:08.623045 | orchestrator | included: /var/lib/zuul/builds/0ad214290bf44f7ebea1a2f7a3cd85b0/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-03-22 22:10:08.631875 |
2025-03-22 22:10:08.631980 | TASK [add-build-sshkey : Create Temp SSH key]
2025-03-22 22:10:09.187504 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-03-22 22:10:09.187945 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/0ad214290bf44f7ebea1a2f7a3cd85b0/work/0ad214290bf44f7ebea1a2f7a3cd85b0_id_rsa
2025-03-22 22:10:09.188051 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/0ad214290bf44f7ebea1a2f7a3cd85b0/work/0ad214290bf44f7ebea1a2f7a3cd85b0_id_rsa.pub
2025-03-22 22:10:09.188118 | orchestrator -> localhost | The key fingerprint is:
2025-03-22 22:10:09.188227 | orchestrator -> localhost | SHA256:aLkw/huhPkb9VloHTTHOgH1osenXUQjv7i+zW+wFbUE zuul-build-sshkey
2025-03-22 22:10:09.188289 | orchestrator -> localhost | The key's randomart image is:
2025-03-22 22:10:09.188344 | orchestrator -> localhost | +---[RSA 3072]----+
2025-03-22 22:10:09.188461 | orchestrator -> localhost | | ooo+o E.|
2025-03-22 22:10:09.188520 | orchestrator -> localhost | | . +*ooo. |
2025-03-22 22:10:09.188609 | orchestrator -> localhost | | .o+o o. |
2025-03-22 22:10:09.188662 | orchestrator -> localhost | | o .. .o o.|
2025-03-22 22:10:09.188780 | orchestrator -> localhost | | o.= S ... + o|
2025-03-22 22:10:09.188837 | orchestrator -> localhost | | ..=.o o... + |
2025-03-22 22:10:09.188903 | orchestrator -> localhost | | .o o. + . . +|
2025-03-22 22:10:09.188958 | orchestrator -> localhost | | .o. .+ .oo.|
2025-03-22 22:10:09.189009 | orchestrator -> localhost | | ...oo +*o|
2025-03-22 22:10:09.189061 | orchestrator -> localhost | +----[SHA256]-----+
2025-03-22 22:10:09.189179 | orchestrator -> localhost | ok: Runtime: 0:00:00.057348
2025-03-22 22:10:09.205705 |
2025-03-22 22:10:09.205840 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-03-22 22:10:09.254465 | orchestrator | ok
2025-03-22 22:10:09.267737 | orchestrator | included: /var/lib/zuul/builds/0ad214290bf44f7ebea1a2f7a3cd85b0/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-03-22 22:10:09.279023 |
2025-03-22 22:10:09.279127 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-03-22 22:10:09.315230 | orchestrator | skipping: Conditional result was False
2025-03-22 22:10:09.332082 |
2025-03-22 22:10:09.332208 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-03-22 22:10:10.078994 | orchestrator | changed
2025-03-22 22:10:10.089528 |
2025-03-22 22:10:10.089644 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-03-22 22:10:10.348224 | orchestrator | ok
2025-03-22 22:10:10.358384 |
2025-03-22 22:10:10.358493 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-03-22 22:10:10.732996 | orchestrator | ok
2025-03-22 22:10:10.742685 |
2025-03-22 22:10:10.742802 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-03-22 22:10:11.131685 | orchestrator | ok
2025-03-22 22:10:11.142287 |
2025-03-22 22:10:11.142417 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-03-22 22:10:11.168972 | orchestrator | skipping: Conditional result was False
2025-03-22 22:10:11.214967 |
2025-03-22 22:10:11.215081 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-03-22 22:10:11.630723 | orchestrator -> localhost | changed
2025-03-22 22:10:11.647036 |
2025-03-22 22:10:11.647155 | TASK [add-build-sshkey : Add back temp key]
2025-03-22 22:10:11.987479 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/0ad214290bf44f7ebea1a2f7a3cd85b0/work/0ad214290bf44f7ebea1a2f7a3cd85b0_id_rsa (zuul-build-sshkey)
2025-03-22 22:10:11.987830 | orchestrator -> localhost | ok: Runtime: 0:00:00.016110
2025-03-22 22:10:11.996569 |
2025-03-22 22:10:11.996698 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-03-22 22:10:12.359917 | orchestrator | ok
2025-03-22 22:10:12.368124 |
2025-03-22 22:10:12.368240 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-03-22 22:10:12.404005 | orchestrator | skipping: Conditional result was False
2025-03-22 22:10:12.420542 |
2025-03-22 22:10:12.420653 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-03-22 22:10:12.785553 | orchestrator | ok
2025-03-22 22:10:12.803893 |
2025-03-22 22:10:12.804008 | TASK [validate-host : Define zuul_info_dir fact]
2025-03-22 22:10:12.849257 | orchestrator | ok
2025-03-22 22:10:12.858275 |
2025-03-22 22:10:12.858385 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-03-22 22:10:13.148061 | orchestrator -> localhost | ok
2025-03-22 22:10:13.164099 |
2025-03-22 22:10:13.164242 | TASK [validate-host : Collect information about the host]
2025-03-22 22:10:14.310906 | orchestrator | ok
2025-03-22 22:10:14.327763 |
2025-03-22 22:10:14.327884 | TASK [validate-host : Sanitize hostname]
2025-03-22 22:10:14.409858 | orchestrator | ok
2025-03-22 22:10:14.419180 |
2025-03-22 22:10:14.419297 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-03-22 22:10:14.968865 | orchestrator -> localhost | changed
2025-03-22 22:10:14.985543 |
2025-03-22 22:10:14.985755 | TASK [validate-host : Collect information about zuul worker]
2025-03-22 22:10:15.493100 | orchestrator | ok
2025-03-22 22:10:15.501842 |
2025-03-22 22:10:15.501968 | TASK [validate-host : Write out all zuul information for each host]
2025-03-22 22:10:16.027713 | orchestrator -> localhost | changed
2025-03-22 22:10:16.052689 |
2025-03-22 22:10:16.052832 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-03-22 22:10:16.338234 | orchestrator | ok
2025-03-22 22:10:16.348278 |
2025-03-22 22:10:16.348428 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-03-22 22:10:33.729312 | orchestrator | changed:
2025-03-22 22:10:33.729546 | orchestrator | .d..t...... src/
2025-03-22 22:10:33.729590 | orchestrator | .d..t...... src/github.com/
2025-03-22 22:10:33.729620 | orchestrator | .d..t...... src/github.com/osism/
2025-03-22 22:10:33.729646 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-03-22 22:10:33.729688 | orchestrator | RedHat.yml
2025-03-22 22:10:33.744518 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-03-22 22:10:33.744535 | orchestrator | RedHat.yml
2025-03-22 22:10:33.744588 | orchestrator | = 1.53.0"...
2025-03-22 22:10:44.854572 | orchestrator | 22:10:44.854 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-03-22 22:10:46.075332 | orchestrator | 22:10:46.075 STDOUT terraform: - Installing hashicorp/local v2.5.2...
2025-03-22 22:10:47.374904 | orchestrator | 22:10:47.374 STDOUT terraform: - Installed hashicorp/local v2.5.2 (signed, key ID 0C0AF313E5FD9F80)
2025-03-22 22:10:48.339261 | orchestrator | 22:10:48.339 STDOUT terraform: - Installing hashicorp/null v3.2.3...
2025-03-22 22:10:49.184756 | orchestrator | 22:10:49.184 STDOUT terraform: - Installed hashicorp/null v3.2.3 (signed, key ID 0C0AF313E5FD9F80)
2025-03-22 22:10:50.458509 | orchestrator | 22:10:50.458 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0...
2025-03-22 22:10:51.687458 | orchestrator | 22:10:51.687 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2)
2025-03-22 22:10:51.687544 | orchestrator | 22:10:51.687 STDOUT terraform: Providers are signed by their developers.
2025-03-22 22:10:51.687562 | orchestrator | 22:10:51.687 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-03-22 22:10:51.687586 | orchestrator | 22:10:51.687 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-03-22 22:10:51.687603 | orchestrator | 22:10:51.687 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-03-22 22:10:51.687618 | orchestrator | 22:10:51.687 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-03-22 22:10:51.687633 | orchestrator | 22:10:51.687 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-03-22 22:10:51.687671 | orchestrator | 22:10:51.687 STDOUT terraform: you run "tofu init" in the future.
2025-03-22 22:10:51.690309 | orchestrator | 22:10:51.690 STDOUT terraform: OpenTofu has been successfully initialized!
2025-03-22 22:10:51.690413 | orchestrator | 22:10:51.690 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-03-22 22:10:51.690539 | orchestrator | 22:10:51.690 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-03-22 22:10:51.690575 | orchestrator | 22:10:51.690 STDOUT terraform: should now work.
2025-03-22 22:10:51.690714 | orchestrator | 22:10:51.690 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-03-22 22:10:51.690834 | orchestrator | 22:10:51.690 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-03-22 22:10:51.691000 | orchestrator | 22:10:51.690 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-03-22 22:10:51.792404 | orchestrator | 22:10:51.792 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-03-22 22:10:51.945990 | orchestrator | 22:10:51.945 STDOUT terraform: Created and switched to workspace "ci"!
2025-03-22 22:10:51.946045 | orchestrator | 22:10:51.945 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-03-22 22:10:51.946095 | orchestrator | 22:10:51.946 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-03-22 22:10:51.946115 | orchestrator | 22:10:51.946 STDOUT terraform: for this configuration.
2025-03-22 22:10:52.116844 | orchestrator | 22:10:52.116 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-03-22 22:10:52.183955 | orchestrator | 22:10:52.183 STDOUT terraform: ci.auto.tfvars
2025-03-22 22:10:52.323361 | orchestrator | 22:10:52.323 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-03-22 22:10:53.107918 | orchestrator | 22:10:53.107 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-03-22 22:10:53.597113 | orchestrator | 22:10:53.596 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-03-22 22:10:53.768884 | orchestrator | 22:10:53.768 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-03-22 22:10:53.768955 | orchestrator | 22:10:53.768 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-03-22 22:10:53.768997 | orchestrator | 22:10:53.768 STDOUT terraform:   + create
2025-03-22 22:10:53.769056 | orchestrator | 22:10:53.768 STDOUT terraform:  <= read (data resources)
2025-03-22 22:10:53.769130 | orchestrator | 22:10:53.769 STDOUT terraform: OpenTofu will perform the following actions:
2025-03-22 22:10:53.769329 | orchestrator | 22:10:53.769 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-03-22 22:10:53.769405 | orchestrator | 22:10:53.769 STDOUT terraform:   # (config refers to values not yet known)
2025-03-22 22:10:53.769485 | orchestrator | 22:10:53.769 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-03-22 22:10:53.769563 | orchestrator | 22:10:53.769 STDOUT terraform:       + checksum    = (known after apply)
2025-03-22 22:10:53.769640 | orchestrator | 22:10:53.769 STDOUT terraform:       + created_at  = (known after apply)
2025-03-22 22:10:53.769723 | orchestrator | 22:10:53.769 STDOUT terraform:       + file        = (known after apply)
2025-03-22 22:10:53.769805 | orchestrator | 22:10:53.769 STDOUT terraform:       + id          = (known after apply)
2025-03-22 22:10:53.769882 | orchestrator | 22:10:53.769 STDOUT terraform:       + metadata    = (known after apply)
2025-03-22 22:10:53.769959 | orchestrator | 22:10:53.769 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-03-22 22:10:53.770080 | orchestrator | 22:10:53.769 STDOUT terraform:       + min_ram_mb  = (known after apply)
2025-03-22 22:10:53.770130 | orchestrator | 22:10:53.770 STDOUT terraform:       + most_recent = true
2025-03-22 22:10:53.770205 | orchestrator | 22:10:53.770 STDOUT terraform:       + name        = (known after apply)
2025-03-22 22:10:53.770297 | orchestrator | 22:10:53.770 STDOUT terraform:       + protected   = (known after apply)
2025-03-22 22:10:53.770372 | orchestrator | 22:10:53.770 STDOUT terraform:       + region      = (known after apply)
2025-03-22 22:10:53.770448 | orchestrator | 22:10:53.770 STDOUT terraform:       + schema      = (known after apply)
2025-03-22 22:10:53.770525 | orchestrator | 22:10:53.770 STDOUT terraform:       + size_bytes  = (known after apply)
2025-03-22 22:10:53.770607 | orchestrator | 22:10:53.770 STDOUT terraform:       + tags        = (known after apply)
2025-03-22 22:10:53.770689 | orchestrator | 22:10:53.770 STDOUT terraform:       + updated_at  = (known after apply)
2025-03-22 22:10:53.770721 | orchestrator | 22:10:53.770 STDOUT terraform:     }
2025-03-22 22:10:53.770850 | orchestrator | 22:10:53.770 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-03-22 22:10:53.770947 | orchestrator | 22:10:53.770 STDOUT terraform:   # (config refers to values not yet known)
2025-03-22 22:10:53.771049 | orchestrator | 22:10:53.770 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-03-22 22:10:53.771129 | orchestrator | 22:10:53.771 STDOUT terraform:       + checksum    = (known after apply)
2025-03-22 22:10:53.771205 | orchestrator | 22:10:53.771 STDOUT terraform:       + created_at  = (known after apply)
2025-03-22 22:10:53.771299 | orchestrator | 22:10:53.771 STDOUT terraform:       + file        = (known after apply)
2025-03-22 22:10:53.771376 | orchestrator | 22:10:53.771 STDOUT terraform:       + id          = (known after apply)
2025-03-22 22:10:53.771514 | orchestrator | 22:10:53.771 STDOUT terraform:       + metadata    = (known after apply)
2025-03-22 22:10:53.771634 | orchestrator | 22:10:53.771 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-03-22 22:10:53.771726 | orchestrator | 22:10:53.771 STDOUT terraform:       + min_ram_mb  = (known after apply)
2025-03-22 22:10:53.771778 | orchestrator | 22:10:53.771 STDOUT terraform:       + most_recent = true
2025-03-22 22:10:53.771855 | orchestrator | 22:10:53.771 STDOUT terraform:       + name        = (known after apply)
2025-03-22 22:10:53.771931 | orchestrator | 22:10:53.771 STDOUT terraform:       + protected   = (known after apply)
2025-03-22 22:10:53.772008 | orchestrator | 22:10:53.771 STDOUT terraform:       + region      = (known after apply)
2025-03-22 22:10:53.772084 | orchestrator | 22:10:53.772 STDOUT terraform:       + schema      = (known after apply)
2025-03-22 22:10:53.772161 | orchestrator | 22:10:53.772 STDOUT terraform:       + size_bytes  = (known after apply)
2025-03-22 22:10:53.772265 | orchestrator | 22:10:53.772 STDOUT terraform:       + tags        = (known after apply)
2025-03-22 22:10:53.772342 | orchestrator | 22:10:53.772 STDOUT terraform:       + updated_at  = (known after apply)
2025-03-22 22:10:53.772380 | orchestrator | 22:10:53.772 STDOUT terraform:     }
2025-03-22 22:10:53.773053 | orchestrator | 22:10:53.772 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-03-22 22:10:53.773140 | orchestrator | 22:10:53.773 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-03-22 22:10:53.773256 | orchestrator | 22:10:53.773 STDOUT terraform:       + content              = (known after apply)
2025-03-22 22:10:53.773351 | orchestrator | 22:10:53.773 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-03-22 22:10:53.773446 | orchestrator | 22:10:53.773 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-03-22 22:10:53.773534 | orchestrator | 22:10:53.773 STDOUT terraform:       + content_md5          = (known after apply)
2025-03-22 22:10:53.773630 | orchestrator | 22:10:53.773 STDOUT terraform:       + content_sha1         = (known after apply)
2025-03-22 22:10:53.773732 | orchestrator | 22:10:53.773 STDOUT terraform:       + content_sha256       = (known after apply)
2025-03-22 22:10:53.773828 | orchestrator | 22:10:53.773 STDOUT terraform:       + content_sha512       = (known after apply)
2025-03-22 22:10:53.773893 | orchestrator | 22:10:53.773 STDOUT terraform:       + directory_permission = "0777"
2025-03-22 22:10:53.773958 | orchestrator | 22:10:53.773 STDOUT terraform:       + file_permission      = "0644"
2025-03-22 22:10:53.774095 | orchestrator | 22:10:53.773 STDOUT terraform:       + filename             = ".MANAGER_ADDRESS.ci"
2025-03-22 22:10:53.774182 | orchestrator | 22:10:53.774 STDOUT terraform:       + id                   = (known after apply)
2025-03-22 22:10:53.774224 | orchestrator | 22:10:53.774 STDOUT terraform:     }
2025-03-22 22:10:53.774303 | orchestrator | 22:10:53.774 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-03-22 22:10:53.774370 | orchestrator | 22:10:53.774 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-03-22 22:10:53.774467 | orchestrator | 22:10:53.774 STDOUT terraform:       + content              = (known after apply)
2025-03-22 22:10:53.774561 | orchestrator | 22:10:53.774 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-03-22 22:10:53.774653 | orchestrator | 22:10:53.774 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-03-22 22:10:53.774749 | orchestrator | 22:10:53.774 STDOUT terraform:       + content_md5          = (known after apply)
2025-03-22 22:10:53.774848 | orchestrator | 22:10:53.774 STDOUT terraform:       + content_sha1         = (known after apply)
2025-03-22 22:10:53.774945 | orchestrator | 22:10:53.774 STDOUT terraform:       + content_sha256       = (known after apply)
2025-03-22 22:10:53.775040 | orchestrator | 22:10:53.774 STDOUT terraform:       + content_sha512       = (known after apply)
2025-03-22 22:10:53.775103 | orchestrator | 22:10:53.775 STDOUT terraform:       + directory_permission = "0777"
2025-03-22 22:10:53.775167 | orchestrator | 22:10:53.775 STDOUT terraform:       + file_permission      = "0644"
2025-03-22 22:10:53.775267 | orchestrator | 22:10:53.775 STDOUT terraform:       + filename             = ".id_rsa.ci.pub"
2025-03-22 22:10:53.775365 | orchestrator | 22:10:53.775 STDOUT terraform:       + id                   = (known after apply)
2025-03-22 22:10:53.775400 | orchestrator | 22:10:53.775 STDOUT terraform:     }
2025-03-22 22:10:53.775467 | orchestrator | 22:10:53.775 STDOUT terraform:   # local_file.inventory will be created
2025-03-22 22:10:53.775531 | orchestrator | 22:10:53.775 STDOUT terraform:   + resource "local_file" "inventory" {
2025-03-22 22:10:53.775629 | orchestrator | 22:10:53.775 STDOUT terraform:       + content              = (known after apply)
2025-03-22 22:10:53.775722 | orchestrator | 22:10:53.775 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-03-22 22:10:53.775815 | orchestrator | 22:10:53.775 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-03-22 22:10:53.775909 | orchestrator | 22:10:53.775 STDOUT terraform:       + content_md5          = (known after apply)
2025-03-22 22:10:53.776005 | orchestrator | 22:10:53.775 STDOUT terraform:       + content_sha1         = (known after apply)
2025-03-22 22:10:53.776100 | orchestrator | 22:10:53.776 STDOUT terraform:       + content_sha256       = (known after apply)
2025-03-22 22:10:53.776195 | orchestrator | 22:10:53.776 STDOUT terraform:       + content_sha512       = (known after apply)
2025-03-22 22:10:53.776386 | orchestrator | 22:10:53.776 STDOUT terraform:       + directory_permission = "0777"
2025-03-22 22:10:53.776462 | orchestrator | 22:10:53.776 STDOUT terraform:       + file_permission      = "0644"
2025-03-22 22:10:53.776545 | orchestrator | 22:10:53.776 STDOUT terraform:       + filename             = "inventory.ci"
2025-03-22 22:10:53.776641 | orchestrator | 22:10:53.776 STDOUT terraform:       + id                   = (known after apply)
2025-03-22 22:10:53.776679 | orchestrator | 22:10:53.776 STDOUT terraform:     }
2025-03-22 22:10:53.776757 | orchestrator | 22:10:53.776 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-03-22 22:10:53.776823 | orchestrator | 22:10:53.776 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-03-22 22:10:53.776885 | orchestrator | 22:10:53.776 STDOUT terraform:       + content              = (sensitive value)
2025-03-22 22:10:53.776954 | orchestrator | 22:10:53.776 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-03-22 22:10:53.777024 | orchestrator | 22:10:53.776 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-03-22 22:10:53.777095 | orchestrator | 22:10:53.777 STDOUT terraform:       + content_md5          = (known after apply)
2025-03-22 22:10:53.777166 | orchestrator | 22:10:53.777 STDOUT terraform:       + content_sha1         = (known after apply)
2025-03-22 22:10:53.777248 | orchestrator | 22:10:53.777 STDOUT terraform:       + content_sha256       = (known after apply)
2025-03-22 22:10:53.777316 | orchestrator | 22:10:53.777 STDOUT terraform:       + content_sha512       = (known after apply)
2025-03-22 22:10:53.777364 | orchestrator | 22:10:53.777 STDOUT terraform:       + directory_permission = "0700"
2025-03-22 22:10:53.777411 | orchestrator | 22:10:53.777 STDOUT terraform:       + file_permission      = "0600"
2025-03-22 22:10:53.777468 | orchestrator | 22:10:53.777 STDOUT terraform:       + filename             = ".id_rsa.ci"
2025-03-22 22:10:53.777539 | orchestrator | 22:10:53.777 STDOUT terraform:       + id                   = (known after apply)
2025-03-22 22:10:53.777564 | orchestrator | 22:10:53.777 STDOUT terraform:     }
2025-03-22 22:10:53.777620 | orchestrator | 22:10:53.777 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-03-22 22:10:53.777678 | orchestrator | 22:10:53.777 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-03-22 22:10:53.777718 | orchestrator | 22:10:53.777 STDOUT terraform:       + id = (known after apply)
2025-03-22 22:10:53.777743 | orchestrator | 22:10:53.777 STDOUT terraform:     }
2025-03-22 22:10:53.777839 | orchestrator | 22:10:53.777 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-03-22 22:10:53.777931 | orchestrator | 22:10:53.777 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-03-22 22:10:53.777992 | orchestrator | 22:10:53.777 STDOUT terraform:       + attachment        = (known after apply)
2025-03-22 22:10:53.778052 | orchestrator | 22:10:53.777 STDOUT terraform:       + availability_zone = "nova"
2025-03-22 22:10:53.778113 | orchestrator | 22:10:53.778 STDOUT terraform:       + id                = (known after apply)
2025-03-22 22:10:53.778173 | orchestrator | 22:10:53.778 STDOUT terraform:       + image_id          = (known after apply)
2025-03-22 22:10:53.778242 | orchestrator | 22:10:53.778 STDOUT terraform:       + metadata          = (known after apply)
2025-03-22 22:10:53.778319 | orchestrator | 22:10:53.778 STDOUT terraform:       + name              = "testbed-volume-manager-base"
2025-03-22 22:10:53.778379 | orchestrator | 22:10:53.778 STDOUT terraform:       + region            = (known after apply)
2025-03-22 22:10:53.778433 | orchestrator | 22:10:53.778 STDOUT terraform:       + size              = 80
2025-03-22 22:10:53.778475 | orchestrator | 22:10:53.778 STDOUT terraform:       + volume_type       = "ssd"
2025-03-22 22:10:53.778501 | orchestrator | 22:10:53.778 STDOUT terraform:     }
2025-03-22 22:10:53.778596 | orchestrator | 22:10:53.778 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-03-22 22:10:53.778688 | orchestrator | 22:10:53.778 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-03-22 22:10:53.778748 | orchestrator | 22:10:53.778 STDOUT terraform:       + attachment        = (known after apply)
2025-03-22 22:10:53.778787 | orchestrator | 22:10:53.778 STDOUT terraform:       + availability_zone = "nova"
2025-03-22 22:10:53.778850 | orchestrator | 22:10:53.778 STDOUT terraform:       + id                = (known after apply)
2025-03-22 22:10:53.778910 | orchestrator | 22:10:53.778 STDOUT terraform:       + image_id          = (known after apply)
2025-03-22 22:10:53.778968 | orchestrator | 22:10:53.778 STDOUT terraform:       + metadata          = (known after apply)
2025-03-22 22:10:53.779045 | orchestrator | 22:10:53.778 STDOUT terraform:       + name              = "testbed-volume-0-node-base"
2025-03-22 22:10:53.779104 | orchestrator | 22:10:53.779 STDOUT terraform:       + region            = (known after apply)
2025-03-22 22:10:53.779143 | orchestrator | 22:10:53.779 STDOUT terraform:       + size              = 80
2025-03-22 22:10:53.779183 | orchestrator | 22:10:53.779 STDOUT terraform:       + volume_type       = "ssd"
2025-03-22 22:10:53.779220 | orchestrator | 22:10:53.779 STDOUT terraform:     }
2025-03-22 22:10:53.779388 | orchestrator | 22:10:53.779 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-03-22 22:10:53.779471 | orchestrator | 22:10:53.779 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-03-22 22:10:53.779532 | orchestrator | 22:10:53.779 STDOUT terraform:       + attachment        = (known after apply)
2025-03-22 22:10:53.779571 | orchestrator | 22:10:53.779 STDOUT terraform:       + availability_zone = "nova"
2025-03-22 22:10:53.779631 | orchestrator | 22:10:53.779 STDOUT terraform:       + id                = (known after apply)
2025-03-22 22:10:53.779691 | orchestrator | 22:10:53.779 STDOUT terraform:       + image_id          = (known after apply)
2025-03-22 22:10:53.779751 | orchestrator | 22:10:53.779 STDOUT terraform:       + metadata          = (known after apply)
2025-03-22 22:10:53.779833 | orchestrator | 22:10:53.779 STDOUT terraform:       + name              = "testbed-volume-1-node-base"
2025-03-22 22:10:53.779895 | orchestrator | 22:10:53.779 STDOUT terraform:       + region            = (known after apply)
2025-03-22 22:10:53.779938 | orchestrator | 22:10:53.779 STDOUT terraform:       + size              = 80
2025-03-22 22:10:53.779978 | orchestrator | 22:10:53.779 STDOUT terraform:       + volume_type       = "ssd"
2025-03-22 22:10:53.780002 | orchestrator | 22:10:53.779 STDOUT terraform:     }
2025-03-22 22:10:53.780096 | orchestrator | 22:10:53.780 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-03-22 22:10:53.780187 | orchestrator | 22:10:53.780 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-03-22 22:10:53.780274 | orchestrator | 22:10:53.780 STDOUT terraform:       + attachment        = (known after apply)
2025-03-22 22:10:53.780314 | orchestrator | 22:10:53.780 STDOUT terraform:       + availability_zone = "nova"
2025-03-22 22:10:53.780375 | orchestrator | 22:10:53.780 STDOUT terraform:       + id                = (known after apply)
2025-03-22 22:10:53.780436 | orchestrator | 22:10:53.780 STDOUT terraform:       + image_id          = (known after apply)
2025-03-22 22:10:53.780497 | orchestrator | 22:10:53.780 STDOUT terraform:       + metadata          = (known after apply)
2025-03-22 22:10:53.780572 | orchestrator | 22:10:53.780 STDOUT terraform:       + name              = "testbed-volume-2-node-base"
2025-03-22 22:10:53.780640 | orchestrator | 22:10:53.780 STDOUT terraform:       + region            = (known after apply)
2025-03-22 22:10:53.780678 | orchestrator | 22:10:53.780 STDOUT terraform:       + size              = 80
2025-03-22 22:10:53.780720 | orchestrator | 22:10:53.780 STDOUT terraform:       + volume_type       = "ssd"
2025-03-22 22:10:53.780744 | orchestrator | 22:10:53.780 STDOUT terraform:     }
2025-03-22 22:10:53.780839 | orchestrator | 22:10:53.780 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-03-22 22:10:53.780933 | orchestrator | 22:10:53.780 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-03-22 22:10:53.780991 | orchestrator | 22:10:53.780 STDOUT terraform:       + attachment        = (known after apply)
2025-03-22 22:10:53.781029 | orchestrator | 22:10:53.780 STDOUT terraform:       + availability_zone = "nova"
2025-03-22 22:10:53.781090 | orchestrator | 22:10:53.781 STDOUT terraform:       + id                = (known after apply)
2025-03-22 22:10:53.781149 | orchestrator | 22:10:53.781 STDOUT terraform:       + image_id          = (known after apply)
2025-03-22 22:10:53.781235 | orchestrator | 22:10:53.781 STDOUT terraform:       + metadata          = (known after apply)
2025-03-22 22:10:53.781295 | orchestrator | 22:10:53.781 STDOUT terraform:       + name              = "testbed-volume-3-node-base"
2025-03-22 22:10:53.781355 | orchestrator | 22:10:53.781 STDOUT terraform:       + region            = (known after apply)
2025-03-22 22:10:53.781393 | orchestrator | 22:10:53.781 STDOUT terraform:       + size              = 80
2025-03-22 22:10:53.781433 | orchestrator | 22:10:53.781 STDOUT terraform:       + volume_type       = "ssd"
2025-03-22 22:10:53.781459 | orchestrator | 22:10:53.781 STDOUT terraform:     }
2025-03-22 22:10:53.781551 | orchestrator | 22:10:53.781 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-03-22 22:10:53.781641 | orchestrator | 22:10:53.781 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-03-22 22:10:53.781701 | orchestrator | 22:10:53.781 STDOUT terraform:       + attachment        = (known after apply)
2025-03-22 22:10:53.781742 | orchestrator | 22:10:53.781 STDOUT terraform:       + availability_zone = "nova"
2025-03-22 22:10:53.781802 | orchestrator | 22:10:53.781 STDOUT terraform:       + id                = (known after apply)
2025-03-22 22:10:53.781862 | orchestrator | 22:10:53.781 STDOUT terraform:       + image_id          = (known after apply)
2025-03-22 22:10:53.781921 | orchestrator | 22:10:53.781 STDOUT terraform:       + metadata          = (known after apply)
2025-03-22 22:10:53.781997 | orchestrator | 22:10:53.781 STDOUT terraform:       + name              = "testbed-volume-4-node-base"
2025-03-22 22:10:53.782900 | orchestrator | 22:10:53.781 STDOUT terraform:       + region            = (known after apply)
2025-03-22 22:10:53.782980 | orchestrator | 22:10:53.782 STDOUT terraform:       + size              = 80
2025-03-22 22:10:53.783061 | orchestrator | 22:10:53.783 STDOUT terraform:       + volume_type       = "ssd"
2025-03-22 22:10:53.783088 | orchestrator | 22:10:53.783 STDOUT terraform:     }
2025-03-22 22:10:53.783252 | orchestrator | 22:10:53.783 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-03-22 22:10:53.783315 | orchestrator | 22:10:53.783 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-03-22 22:10:53.783351 | orchestrator | 22:10:53.783 STDOUT terraform:       + attachment        = (known after apply)
2025-03-22 22:10:53.783371 | orchestrator | 22:10:53.783 STDOUT terraform:       + availability_zone = "nova"
2025-03-22 22:10:53.783401 | orchestrator | 22:10:53.783 STDOUT terraform:       + id                = (known after apply)
2025-03-22 22:10:53.783431 | orchestrator | 22:10:53.783 STDOUT terraform:       + image_id          = (known after apply)
2025-03-22 22:10:53.783461 | orchestrator | 22:10:53.783 STDOUT terraform:       + metadata          = (known after apply)
2025-03-22 22:10:53.783498 | orchestrator | 22:10:53.783 STDOUT terraform:       + name              = "testbed-volume-5-node-base"
2025-03-22 22:10:53.783528 | orchestrator | 22:10:53.783 STDOUT terraform:       + region            = (known after apply)
2025-03-22 22:10:53.783552 | orchestrator | 22:10:53.783 STDOUT terraform:       + size              = 80
2025-03-22 22:10:53.783560 | orchestrator | 22:10:53.783 STDOUT terraform:       + volume_type       = "ssd"
2025-03-22 22:10:53.783579 | orchestrator | 22:10:53.783 STDOUT terraform:     }
2025-03-22 22:10:53.783621 | orchestrator | 22:10:53.783 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-03-22 22:10:53.783663 | orchestrator | 22:10:53.783 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-03-22 22:10:53.783693 | orchestrator | 22:10:53.783 STDOUT terraform:       + attachment        = (known after apply)
2025-03-22 22:10:53.783712 | orchestrator | 22:10:53.783 STDOUT terraform:       + availability_zone = "nova"
2025-03-22 22:10:53.783743 | orchestrator | 22:10:53.783 STDOUT terraform:       + id                = (known after apply)
2025-03-22 22:10:53.783770 | orchestrator | 22:10:53.783 STDOUT terraform:       + metadata          = (known after apply)
2025-03-22 22:10:53.783806 | orchestrator | 22:10:53.783 STDOUT terraform:       + name              = "testbed-volume-0-node-0"
2025-03-22 22:10:53.783836 | orchestrator | 22:10:53.783 STDOUT terraform:       + region            = (known after apply)
2025-03-22 22:10:53.783860 | orchestrator | 22:10:53.783 STDOUT terraform:       + size              = 20
2025-03-22 22:10:53.783873 | orchestrator | 22:10:53.783 STDOUT terraform:       + volume_type       = "ssd"
2025-03-22 22:10:53.783924 | orchestrator | 22:10:53.783 STDOUT terraform:     }
2025-03-22 22:10:53.783931 | orchestrator | 22:10:53.783 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2025-03-22 22:10:53.783966 | orchestrator | 22:10:53.783 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-03-22 22:10:53.783996 | orchestrator | 22:10:53.783 STDOUT terraform:       + attachment        = (known after apply)
2025-03-22 22:10:53.784019 | orchestrator | 22:10:53.783 STDOUT terraform:
+ availability_zone = "nova" 2025-03-22 22:10:53.784045 | orchestrator | 22:10:53.784 STDOUT terraform:  + id = (known after apply) 2025-03-22 22:10:53.784076 | orchestrator | 22:10:53.784 STDOUT terraform:  + metadata = (known after apply) 2025-03-22 22:10:53.784114 | orchestrator | 22:10:53.784 STDOUT terraform:  + name = "testbed-volume-1-node-1" 2025-03-22 22:10:53.784143 | orchestrator | 22:10:53.784 STDOUT terraform:  + region = (known after apply) 2025-03-22 22:10:53.784152 | orchestrator | 22:10:53.784 STDOUT terraform:  + size = 20 2025-03-22 22:10:53.784180 | orchestrator | 22:10:53.784 STDOUT terraform:  + volume_type = "ssd" 2025-03-22 22:10:53.784352 | orchestrator | 22:10:53.784 STDOUT terraform:  } 2025-03-22 22:10:53.784452 | orchestrator | 22:10:53.784 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-03-22 22:10:53.784475 | orchestrator | 22:10:53.784 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-22 22:10:53.784490 | orchestrator | 22:10:53.784 STDOUT terraform:  + attachment = (known after apply) 2025-03-22 22:10:53.784504 | orchestrator | 22:10:53.784 STDOUT terraform:  + availability_zone = "nova" 2025-03-22 22:10:53.784518 | orchestrator | 22:10:53.784 STDOUT terraform:  + id = (known after apply) 2025-03-22 22:10:53.784535 | orchestrator | 22:10:53.784 STDOUT terraform:  + metadata = (known after apply) 2025-03-22 22:10:53.784549 | orchestrator | 22:10:53.784 STDOUT terraform:  + name = "testbed-volume-2-node-2" 2025-03-22 22:10:53.784563 | orchestrator | 22:10:53.784 STDOUT terraform:  + region = (known after apply) 2025-03-22 22:10:53.784579 | orchestrator | 22:10:53.784 STDOUT terraform:  + size = 20 2025-03-22 22:10:53.784597 | orchestrator | 22:10:53.784 STDOUT terraform:  + volume_type = "ssd" 2025-03-22 22:10:53.784731 | orchestrator | 22:10:53.784 STDOUT terraform:  } 2025-03-22 22:10:53.784755 | orchestrator | 22:10:53.784 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-03-22 22:10:53.784773 | orchestrator | 22:10:53.784 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-22 22:10:53.784816 | orchestrator | 22:10:53.784 STDOUT terraform:  + attachment = (known after apply) 2025-03-22 22:10:53.784835 | orchestrator | 22:10:53.784 STDOUT terraform:  + availability_zone = "nova" 2025-03-22 22:10:53.784879 | orchestrator | 22:10:53.784 STDOUT terraform:  + id = (known after apply) 2025-03-22 22:10:53.784900 | orchestrator | 22:10:53.784 STDOUT terraform:  + metadata = (known after apply) 2025-03-22 22:10:53.784939 | orchestrator | 22:10:53.784 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-03-22 22:10:53.784976 | orchestrator | 22:10:53.784 STDOUT terraform:  + region = (known after apply) 2025-03-22 22:10:53.784994 | orchestrator | 22:10:53.784 STDOUT terraform:  + size = 20 2025-03-22 22:10:53.785012 | orchestrator | 22:10:53.784 STDOUT terraform:  + volume_type = "ssd" 2025-03-22 22:10:53.785029 | orchestrator | 22:10:53.785 STDOUT terraform:  } 2025-03-22 22:10:53.785078 | orchestrator | 22:10:53.785 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-03-22 22:10:53.785137 | orchestrator | 22:10:53.785 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-22 22:10:53.785156 | orchestrator | 22:10:53.785 STDOUT terraform:  + attachment = (known after apply) 2025-03-22 22:10:53.785174 | orchestrator | 22:10:53.785 STDOUT terraform:  + availability_zone = "nova" 2025-03-22 22:10:53.785234 | orchestrator | 22:10:53.785 STDOUT terraform:  + id = (known after apply) 2025-03-22 22:10:53.785254 | orchestrator | 22:10:53.785 STDOUT terraform:  + metadata = (known after apply) 2025-03-22 22:10:53.785286 | orchestrator | 22:10:53.785 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-03-22 22:10:53.785321 | orchestrator | 22:10:53.785 STDOUT 
terraform:  + region = (known after apply) 2025-03-22 22:10:53.785339 | orchestrator | 22:10:53.785 STDOUT terraform:  + size = 20 2025-03-22 22:10:53.785357 | orchestrator | 22:10:53.785 STDOUT terraform:  + volume_type = "ssd" 2025-03-22 22:10:53.785375 | orchestrator | 22:10:53.785 STDOUT terraform:  } 2025-03-22 22:10:53.785424 | orchestrator | 22:10:53.785 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-03-22 22:10:53.785463 | orchestrator | 22:10:53.785 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-22 22:10:53.785481 | orchestrator | 22:10:53.785 STDOUT terraform:  + attachment = (known after apply) 2025-03-22 22:10:53.785514 | orchestrator | 22:10:53.785 STDOUT terraform:  + availability_zone = "nova" 2025-03-22 22:10:53.785546 | orchestrator | 22:10:53.785 STDOUT terraform:  + id = (known after apply) 2025-03-22 22:10:53.785586 | orchestrator | 22:10:53.785 STDOUT terraform:  + metadata = (known after apply) 2025-03-22 22:10:53.785619 | orchestrator | 22:10:53.785 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-03-22 22:10:53.785654 | orchestrator | 22:10:53.785 STDOUT terraform:  + region = (known after apply) 2025-03-22 22:10:53.785672 | orchestrator | 22:10:53.785 STDOUT terraform:  + size = 20 2025-03-22 22:10:53.785689 | orchestrator | 22:10:53.785 STDOUT terraform:  + volume_type = "ssd" 2025-03-22 22:10:53.785707 | orchestrator | 22:10:53.785 STDOUT terraform:  } 2025-03-22 22:10:53.785749 | orchestrator | 22:10:53.785 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-03-22 22:10:53.785796 | orchestrator | 22:10:53.785 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-22 22:10:53.785827 | orchestrator | 22:10:53.785 STDOUT terraform:  + attachment = (known after apply) 2025-03-22 22:10:53.785855 | orchestrator | 22:10:53.785 STDOUT terraform:  + availability_zone = "nova" 
2025-03-22 22:10:53.785872 | orchestrator | 22:10:53.785 STDOUT terraform:  + id = (known after apply) 2025-03-22 22:10:53.785908 | orchestrator | 22:10:53.785 STDOUT terraform:  + metadata = (known after apply) 2025-03-22 22:10:53.785951 | orchestrator | 22:10:53.785 STDOUT terraform:  + name = "testbed-volume-6-node-0" 2025-03-22 22:10:53.785985 | orchestrator | 22:10:53.785 STDOUT terraform:  + region = (known after apply) 2025-03-22 22:10:53.786003 | orchestrator | 22:10:53.785 STDOUT terraform:  + size = 20 2025-03-22 22:10:53.786058 | orchestrator | 22:10:53.785 STDOUT terraform:  + volume_type = "ssd" 2025-03-22 22:10:53.786092 | orchestrator | 22:10:53.786 STDOUT terraform:  } 2025-03-22 22:10:53.786111 | orchestrator | 22:10:53.786 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-03-22 22:10:53.786128 | orchestrator | 22:10:53.786 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-22 22:10:53.786170 | orchestrator | 22:10:53.786 STDOUT terraform:  + attachment = (known after apply) 2025-03-22 22:10:53.786188 | orchestrator | 22:10:53.786 STDOUT terraform:  + availability_zone = "nova" 2025-03-22 22:10:53.786237 | orchestrator | 22:10:53.786 STDOUT terraform:  + id = (known after apply) 2025-03-22 22:10:53.786275 | orchestrator | 22:10:53.786 STDOUT terraform:  + metadata = (known after apply) 2025-03-22 22:10:53.786315 | orchestrator | 22:10:53.786 STDOUT terraform:  + name = "testbed-volume-7-node-1" 2025-03-22 22:10:53.786349 | orchestrator | 22:10:53.786 STDOUT terraform:  + region = (known after apply) 2025-03-22 22:10:53.786367 | orchestrator | 22:10:53.786 STDOUT terraform:  + size = 20 2025-03-22 22:10:53.786386 | orchestrator | 22:10:53.786 STDOUT terraform:  + volume_type = "ssd" 2025-03-22 22:10:53.786404 | orchestrator | 22:10:53.786 STDOUT terraform:  } 2025-03-22 22:10:53.786447 | orchestrator | 22:10:53.786 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-03-22 22:10:53.786491 | orchestrator | 22:10:53.786 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-22 22:10:53.786526 | orchestrator | 22:10:53.786 STDOUT terraform:  + attachment = (known after apply) 2025-03-22 22:10:53.786552 | orchestrator | 22:10:53.786 STDOUT terraform:  + availability_zone = "nova" 2025-03-22 22:10:53.786570 | orchestrator | 22:10:53.786 STDOUT terraform:  + id = (known after apply) 2025-03-22 22:10:53.786611 | orchestrator | 22:10:53.786 STDOUT terraform:  + metadata = (known after apply) 2025-03-22 22:10:53.786652 | orchestrator | 22:10:53.786 STDOUT terraform:  + name = "testbed-volume-8-node-2" 2025-03-22 22:10:53.786685 | orchestrator | 22:10:53.786 STDOUT terraform:  + region = (known after apply) 2025-03-22 22:10:53.786702 | orchestrator | 22:10:53.786 STDOUT terraform:  + size = 20 2025-03-22 22:10:53.786720 | orchestrator | 22:10:53.786 STDOUT terraform:  + volume_type = "ssd" 2025-03-22 22:10:53.786737 | orchestrator | 22:10:53.786 STDOUT terraform:  } 2025-03-22 22:10:53.786782 | orchestrator | 22:10:53.786 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[9] will be created 2025-03-22 22:10:53.786829 | orchestrator | 22:10:53.786 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-22 22:10:53.786861 | orchestrator | 22:10:53.786 STDOUT terraform:  + attachment = (known after apply) 2025-03-22 22:10:53.786879 | orchestrator | 22:10:53.786 STDOUT terraform:  + availability_zone = "nova" 2025-03-22 22:10:53.786915 | orchestrator | 22:10:53.786 STDOUT terraform:  + id = (known after apply) 2025-03-22 22:10:53.786948 | orchestrator | 22:10:53.786 STDOUT terraform:  + metadata = (known after apply) 2025-03-22 22:10:53.786989 | orchestrator | 22:10:53.786 STDOUT terraform:  + name = "testbed-volume-9-node-3" 2025-03-22 22:10:53.787022 | orchestrator | 22:10:53.786 STDOUT 
terraform:  + region = (known after apply) 2025-03-22 22:10:53.787039 | orchestrator | 22:10:53.787 STDOUT terraform:  + size = 20 2025-03-22 22:10:53.787056 | orchestrator | 22:10:53.787 STDOUT terraform:  + volume_type = "ssd" 2025-03-22 22:10:53.790840 | orchestrator | 22:10:53.787 STDOUT terraform:  } 2025-03-22 22:10:53.790911 | orchestrator | 22:10:53.790 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[10] will be created 2025-03-22 22:10:53.793247 | orchestrator | 22:10:53.790 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-22 22:10:53.793268 | orchestrator | 22:10:53.790 STDOUT terraform:  + attachment = (known after apply) 2025-03-22 22:10:53.793279 | orchestrator | 22:10:53.790 STDOUT terraform:  + availability_zone = "nova" 2025-03-22 22:10:53.793289 | orchestrator | 22:10:53.790 STDOUT terraform:  + id = (known after apply) 2025-03-22 22:10:53.793303 | orchestrator | 22:10:53.791 STDOUT terraform:  + metadata = (known after apply) 2025-03-22 22:10:53.793313 | orchestrator | 22:10:53.791 STDOUT terraform:  + name = "testbed-volume-10-node-4" 2025-03-22 22:10:53.793323 | orchestrator | 22:10:53.791 STDOUT terraform:  + region = (known after apply) 2025-03-22 22:10:53.793333 | orchestrator | 22:10:53.791 STDOUT terraform:  + size = 20 2025-03-22 22:10:53.793344 | orchestrator | 22:10:53.791 STDOUT terraform:  + volume_type = "ssd" 2025-03-22 22:10:53.793354 | orchestrator | 22:10:53.791 STDOUT terraform:  } 2025-03-22 22:10:53.793364 | orchestrator | 22:10:53.791 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[11] will be created 2025-03-22 22:10:53.793374 | orchestrator | 22:10:53.791 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-22 22:10:53.793385 | orchestrator | 22:10:53.791 STDOUT terraform:  + attachment = (known after apply) 2025-03-22 22:10:53.793395 | orchestrator | 22:10:53.791 STDOUT terraform:  + availability_zone = "nova" 
2025-03-22 22:10:53.793405 | orchestrator | 22:10:53.791 STDOUT terraform:  + id = (known after apply) 2025-03-22 22:10:53.793415 | orchestrator | 22:10:53.791 STDOUT terraform:  + metadata = (known after apply) 2025-03-22 22:10:53.793425 | orchestrator | 22:10:53.791 STDOUT terraform:  + name = "testbed-volume-11-node-5" 2025-03-22 22:10:53.793448 | orchestrator | 22:10:53.791 STDOUT terraform:  + region = (known after apply) 2025-03-22 22:10:53.793458 | orchestrator | 22:10:53.791 STDOUT terraform:  + size = 20 2025-03-22 22:10:53.793468 | orchestrator | 22:10:53.791 STDOUT terraform:  + volume_type = "ssd" 2025-03-22 22:10:53.793479 | orchestrator | 22:10:53.791 STDOUT terraform:  } 2025-03-22 22:10:53.793489 | orchestrator | 22:10:53.791 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[12] will be created 2025-03-22 22:10:53.793499 | orchestrator | 22:10:53.791 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-22 22:10:53.793509 | orchestrator | 22:10:53.791 STDOUT terraform:  + attachment = (known after apply) 2025-03-22 22:10:53.793519 | orchestrator | 22:10:53.791 STDOUT terraform:  + availability_zone = "nova" 2025-03-22 22:10:53.793529 | orchestrator | 22:10:53.791 STDOUT terraform:  + id = (known after apply) 2025-03-22 22:10:53.793539 | orchestrator | 22:10:53.791 STDOUT terraform:  + metadata = (known after apply) 2025-03-22 22:10:53.793549 | orchestrator | 22:10:53.791 STDOUT terraform:  + name = "testbed-volume-12-node-0" 2025-03-22 22:10:53.793559 | orchestrator | 22:10:53.791 STDOUT terraform:  + region = (known after apply) 2025-03-22 22:10:53.793569 | orchestrator | 22:10:53.791 STDOUT terraform:  + size = 20 2025-03-22 22:10:53.793579 | orchestrator | 22:10:53.791 STDOUT terraform:  + volume_type = "ssd" 2025-03-22 22:10:53.793589 | orchestrator | 22:10:53.791 STDOUT terraform:  } 2025-03-22 22:10:53.793599 | orchestrator | 22:10:53.791 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[13] will be created 2025-03-22 22:10:53.793609 | orchestrator | 22:10:53.791 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-22 22:10:53.793620 | orchestrator | 22:10:53.791 STDOUT terraform:  + attachment = (known after apply) 2025-03-22 22:10:53.793630 | orchestrator | 22:10:53.791 STDOUT terraform:  + availability_zone = "nova" 2025-03-22 22:10:53.793640 | orchestrator | 22:10:53.791 STDOUT terraform:  + id = (known after apply) 2025-03-22 22:10:53.793657 | orchestrator | 22:10:53.791 STDOUT terraform:  + metadata = (known after apply) 2025-03-22 22:10:53.793668 | orchestrator | 22:10:53.792 STDOUT terraform:  + name = "testbed-volume-13-node-1" 2025-03-22 22:10:53.793679 | orchestrator | 22:10:53.792 STDOUT terraform:  + region = (known after apply) 2025-03-22 22:10:53.793689 | orchestrator | 22:10:53.792 STDOUT terraform:  + size = 20 2025-03-22 22:10:53.793699 | orchestrator | 22:10:53.792 STDOUT terraform:  + volume_type = "ssd" 2025-03-22 22:10:53.793710 | orchestrator | 22:10:53.792 STDOUT terraform:  } 2025-03-22 22:10:53.793720 | orchestrator | 22:10:53.792 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[14] will be created 2025-03-22 22:10:53.793730 | orchestrator | 22:10:53.792 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-22 22:10:53.793740 | orchestrator | 22:10:53.792 STDOUT terraform:  + attachment = (known after apply) 2025-03-22 22:10:53.793750 | orchestrator | 22:10:53.792 STDOUT terraform:  + availability_zone = "nova" 2025-03-22 22:10:53.793765 | orchestrator | 22:10:53.792 STDOUT terraform:  + id = (known after apply) 2025-03-22 22:10:53.793776 | orchestrator | 22:10:53.792 STDOUT terraform:  + metadata = (known after apply) 2025-03-22 22:10:53.793786 | orchestrator | 22:10:53.792 STDOUT terraform:  + name = "testbed-volume-14-node-2" 2025-03-22 22:10:53.793796 | orchestrator | 22:10:53.792 STDOUT 
terraform:  + region = (known after apply) 2025-03-22 22:10:53.793806 | orchestrator | 22:10:53.792 STDOUT terraform:  + size = 20 2025-03-22 22:10:53.793816 | orchestrator | 22:10:53.792 STDOUT terraform:  + volume_type = "ssd" 2025-03-22 22:10:53.793826 | orchestrator | 22:10:53.792 STDOUT terraform:  } 2025-03-22 22:10:53.793837 | orchestrator | 22:10:53.792 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[15] will be created 2025-03-22 22:10:53.793847 | orchestrator | 22:10:53.792 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-22 22:10:53.793857 | orchestrator | 22:10:53.792 STDOUT terraform:  + attachment = (known after apply) 2025-03-22 22:10:53.793867 | orchestrator | 22:10:53.792 STDOUT terraform:  + availability_zone = "nova" 2025-03-22 22:10:53.793877 | orchestrator | 22:10:53.792 STDOUT terraform:  + id = (known after apply) 2025-03-22 22:10:53.793890 | orchestrator | 22:10:53.792 STDOUT terraform:  + metadata = (known after apply) 2025-03-22 22:10:53.793900 | orchestrator | 22:10:53.792 STDOUT terraform:  + name = "testbed-volume-15-node-3" 2025-03-22 22:10:53.793910 | orchestrator | 22:10:53.792 STDOUT terraform:  + region = (known after apply) 2025-03-22 22:10:53.793920 | orchestrator | 22:10:53.792 STDOUT terraform:  + size = 20 2025-03-22 22:10:53.793931 | orchestrator | 22:10:53.792 STDOUT terraform:  + volume_type = "ssd" 2025-03-22 22:10:53.793941 | orchestrator | 22:10:53.792 STDOUT terraform:  } 2025-03-22 22:10:53.793951 | orchestrator | 22:10:53.792 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[16] will be created 2025-03-22 22:10:53.793961 | orchestrator | 22:10:53.792 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-22 22:10:53.793972 | orchestrator | 22:10:53.792 STDOUT terraform:  + attachment = (known after apply) 2025-03-22 22:10:53.793982 | orchestrator | 22:10:53.792 STDOUT terraform:  + availability_zone = "nova" 
2025-03-22 22:10:53.793992 | orchestrator | 22:10:53.792 STDOUT terraform:  + id = (known after apply) 2025-03-22 22:10:53.794003 | orchestrator | 22:10:53.792 STDOUT terraform:  + metadata = (known after apply) 2025-03-22 22:10:53.794034 | orchestrator | 22:10:53.792 STDOUT terraform:  + name = "testbed-volume-16-node-4" 2025-03-22 22:10:53.794047 | orchestrator | 22:10:53.792 STDOUT terraform:  + region = (known after apply) 2025-03-22 22:10:53.794057 | orchestrator | 22:10:53.793 STDOUT terraform:  + size = 20 2025-03-22 22:10:53.794067 | orchestrator | 22:10:53.793 STDOUT terraform:  + volume_type = "ssd" 2025-03-22 22:10:53.794082 | orchestrator | 22:10:53.793 STDOUT terraform:  } 2025-03-22 22:10:53.794392 | orchestrator | 22:10:53.793 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[17] will be created 2025-03-22 22:10:53.794417 | orchestrator | 22:10:53.793 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-22 22:10:53.794426 | orchestrator | 22:10:53.793 STDOUT terraform:  + attachment = (known after apply) 2025-03-22 22:10:53.794434 | orchestrator | 22:10:53.793 STDOUT terraform:  + availability_zone = "nova" 2025-03-22 22:10:53.794444 | orchestrator | 22:10:53.793 STDOUT terraform:  + id = (known after apply) 2025-03-22 22:10:53.794453 | orchestrator | 22:10:53.793 STDOUT terraform:  + metadata = (known after apply) 2025-03-22 22:10:53.794461 | orchestrator | 22:10:53.793 STDOUT terraform:  + name = "testbed-volume-17-node-5" 2025-03-22 22:10:53.794470 | orchestrator | 22:10:53.793 STDOUT terraform:  + region = (known after apply) 2025-03-22 22:10:53.794478 | orchestrator | 22:10:53.793 STDOUT terraform:  + size = 20 2025-03-22 22:10:53.794487 | orchestrator | 22:10:53.793 STDOUT terraform:  + volume_type = "ssd" 2025-03-22 22:10:53.794496 | orchestrator | 22:10:53.793 STDOUT terraform:  } 2025-03-22 22:10:53.794507 | orchestrator | 22:10:53.793 STDOUT terraform:  # 
openstack_compute_instance_v2.manager_server will be created 2025-03-22 22:10:53.794516 | orchestrator | 22:10:53.793 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-03-22 22:10:53.794524 | orchestrator | 22:10:53.793 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-03-22 22:10:53.794533 | orchestrator | 22:10:53.793 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-03-22 22:10:53.794541 | orchestrator | 22:10:53.793 STDOUT terraform:  + all_metadata = (known after apply) 2025-03-22 22:10:53.794550 | orchestrator | 22:10:53.793 STDOUT terraform:  + all_tags = (known after apply) 2025-03-22 22:10:53.794558 | orchestrator | 22:10:53.793 STDOUT terraform:  + availability_zone = "nova" 2025-03-22 22:10:53.794567 | orchestrator | 22:10:53.793 STDOUT terraform:  + config_drive = true 2025-03-22 22:10:53.794576 | orchestrator | 22:10:53.793 STDOUT terraform:  + created = (known after apply) 2025-03-22 22:10:53.794584 | orchestrator | 22:10:53.793 STDOUT terraform:  + flavor_id = (known after apply) 2025-03-22 22:10:53.794593 | orchestrator | 22:10:53.793 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-03-22 22:10:53.794601 | orchestrator | 22:10:53.793 STDOUT terraform:  + force_delete = false 2025-03-22 22:10:53.794610 | orchestrator | 22:10:53.793 STDOUT terraform:  + id = (known after apply) 2025-03-22 22:10:53.794619 | orchestrator | 22:10:53.793 STDOUT terraform:  + image_id = (known after apply) 2025-03-22 22:10:53.794627 | orchestrator | 22:10:53.793 STDOUT terraform:  + image_name = (known after apply) 2025-03-22 22:10:53.794636 | orchestrator | 22:10:53.793 STDOUT terraform:  + key_pair = "testbed" 2025-03-22 22:10:53.794644 | orchestrator | 22:10:53.793 STDOUT terraform:  + name = "testbed-manager" 2025-03-22 22:10:53.794653 | orchestrator | 22:10:53.793 STDOUT terraform:  + power_state = "active" 2025-03-22 22:10:53.794662 | orchestrator | 22:10:53.793 STDOUT terraform:  + region = (known after 
apply) 2025-03-22 22:10:53.794670 | orchestrator | 22:10:53.793 STDOUT terraform:  + security_groups = (known after apply) 2025-03-22 22:10:53.794683 | orchestrator | 22:10:53.794 STDOUT terraform:  + stop_before_destroy = false 2025-03-22 22:10:53.794692 | orchestrator | 22:10:53.794 STDOUT terraform:  + updated = (known after apply) 2025-03-22 22:10:53.794705 | orchestrator | 22:10:53.794 STDOUT terraform:  + user_data = (known after apply) 2025-03-22 22:10:53.794739 | orchestrator | 22:10:53.794 STDOUT terraform:  + block_device { 2025-03-22 22:10:53.794749 | orchestrator | 22:10:53.794 STDOUT terraform:  + boot_index = 0 2025-03-22 22:10:53.794757 | orchestrator | 22:10:53.794 STDOUT terraform:  + delete_on_termination = false 2025-03-22 22:10:53.794766 | orchestrator | 22:10:53.794 STDOUT terraform:  + destination_type = "volume" 2025-03-22 22:10:53.794775 | orchestrator | 22:10:53.794 STDOUT terraform:  + multiattach = false 2025-03-22 22:10:53.794783 | orchestrator | 22:10:53.794 STDOUT terraform:  + source_type = "volume" 2025-03-22 22:10:53.794792 | orchestrator | 22:10:53.794 STDOUT terraform:  + uuid = (known after apply) 2025-03-22 22:10:53.794805 | orchestrator | 22:10:53.794 STDOUT terraform:  } 2025-03-22 22:10:53.794814 | orchestrator | 22:10:53.794 STDOUT terraform:  + network { 2025-03-22 22:10:53.794823 | orchestrator | 22:10:53.794 STDOUT terraform:  + access_network = false 2025-03-22 22:10:53.794831 | orchestrator | 22:10:53.794 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-03-22 22:10:53.794840 | orchestrator | 22:10:53.794 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-03-22 22:10:53.794849 | orchestrator | 22:10:53.794 STDOUT terraform:  + mac = (known after apply) 2025-03-22 22:10:53.794857 | orchestrator | 22:10:53.794 STDOUT terraform:  + name = (known after apply) 2025-03-22 22:10:53.794868 | orchestrator | 22:10:53.794 STDOUT terraform:  + port = (known after apply) 2025-03-22 22:10:53.794907 | orchestrator | 
22:10:53.794 STDOUT terraform:  + uuid = (known after apply) 2025-03-22 22:10:53.794917 | orchestrator | 22:10:53.794 STDOUT terraform:  } 2025-03-22 22:10:53.794925 | orchestrator | 22:10:53.794 STDOUT terraform:  } 2025-03-22 22:10:53.794934 | orchestrator | 22:10:53.794 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-03-22 22:10:53.794943 | orchestrator | 22:10:53.794 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-03-22 22:10:53.794954 | orchestrator | 22:10:53.794 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-03-22 22:10:53.794971 | orchestrator | 22:10:53.794 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-03-22 22:10:53.794983 | orchestrator | 22:10:53.794 STDOUT terraform:  + all_metadata = (known after apply) 2025-03-22 22:10:53.795013 | orchestrator | 22:10:53.794 STDOUT terraform:  + all_tags = (known after apply) 2025-03-22 22:10:53.795033 | orchestrator | 22:10:53.795 STDOUT terraform:  + availability_zone = "nova" 2025-03-22 22:10:53.795075 | orchestrator | 22:10:53.795 STDOUT terraform:  + config_drive = true 2025-03-22 22:10:53.795086 | orchestrator | 22:10:53.795 STDOUT terraform:  + created = (known after apply) 2025-03-22 22:10:53.795148 | orchestrator | 22:10:53.795 STDOUT terraform:  + flavor_id = (known after apply) 2025-03-22 22:10:53.795161 | orchestrator | 22:10:53.795 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-03-22 22:10:53.795172 | orchestrator | 22:10:53.795 STDOUT terraform:  + force_delete = false 2025-03-22 22:10:53.795240 | orchestrator | 22:10:53.795 STDOUT terraform:  + id = (known after apply) 2025-03-22 22:10:53.795254 | orchestrator | 22:10:53.795 STDOUT terraform:  + image_id = (known after apply) 2025-03-22 22:10:53.795300 | orchestrator | 22:10:53.795 STDOUT terraform:  + image_name = (known after apply) 2025-03-22 22:10:53.795311 | orchestrator | 22:10:53.795 STDOUT terraform:  + key_pair = "testbed" 2025-03-22 
2025-03-22 22:10:53.795354 | orchestrator | 22:10:53.795 STDOUT terraform:
      + name                = "testbed-node-0"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-1"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[2] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[9] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[10] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[11] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[12] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[13] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[14] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[15] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
2025-03-22 22:10:53.805845 |
orchestrator | 22:10:53.805 STDOUT terraform:  } 2025-03-22 22:10:53.805897 | orchestrator | 22:10:53.805 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[16] will be created 2025-03-22 22:10:53.805944 | orchestrator | 22:10:53.805 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-03-22 22:10:53.805971 | orchestrator | 22:10:53.805 STDOUT terraform:  + device = (known after apply) 2025-03-22 22:10:53.805999 | orchestrator | 22:10:53.805 STDOUT terraform:  + id = (known after apply) 2025-03-22 22:10:53.806042 | orchestrator | 22:10:53.805 STDOUT terraform:  + instance_id = (known after apply) 2025-03-22 22:10:53.806065 | orchestrator | 22:10:53.806 STDOUT terraform:  + region = (known after apply) 2025-03-22 22:10:53.806091 | orchestrator | 22:10:53.806 STDOUT terraform:  + volume_id = (known after apply) 2025-03-22 22:10:53.806147 | orchestrator | 22:10:53.806 STDOUT terraform:  } 2025-03-22 22:10:53.806155 | orchestrator | 22:10:53.806 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[17] will be created 2025-03-22 22:10:53.806197 | orchestrator | 22:10:53.806 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-03-22 22:10:53.806230 | orchestrator | 22:10:53.806 STDOUT terraform:  + device = (known after apply) 2025-03-22 22:10:53.806258 | orchestrator | 22:10:53.806 STDOUT terraform:  + id = (known after apply) 2025-03-22 22:10:53.806283 | orchestrator | 22:10:53.806 STDOUT terraform:  + instance_id = (known after apply) 2025-03-22 22:10:53.806310 | orchestrator | 22:10:53.806 STDOUT terraform:  + region = (known after apply) 2025-03-22 22:10:53.806336 | orchestrator | 22:10:53.806 STDOUT terraform:  + volume_id = (known after apply) 2025-03-22 22:10:53.806399 | orchestrator | 22:10:53.806 STDOUT terraform:  } 2025-03-22 22:10:53.806407 | orchestrator | 22:10:53.806 STDOUT terraform:  # 
openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-03-22 22:10:53.806455 | orchestrator | 22:10:53.806 STDOUT terraform:  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-03-22 22:10:53.806482 | orchestrator | 22:10:53.806 STDOUT terraform:  + fixed_ip = (known after apply) 2025-03-22 22:10:53.806510 | orchestrator | 22:10:53.806 STDOUT terraform:  + floating_ip = (known after apply) 2025-03-22 22:10:53.806538 | orchestrator | 22:10:53.806 STDOUT terraform:  + id = (known after apply) 2025-03-22 22:10:53.806565 | orchestrator | 22:10:53.806 STDOUT terraform:  + port_id = (known after apply) 2025-03-22 22:10:53.806595 | orchestrator | 22:10:53.806 STDOUT terraform:  + region = (known after apply) 2025-03-22 22:10:53.806647 | orchestrator | 22:10:53.806 STDOUT terraform:  } 2025-03-22 22:10:53.806655 | orchestrator | 22:10:53.806 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-03-22 22:10:53.806695 | orchestrator | 22:10:53.806 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-03-22 22:10:53.806716 | orchestrator | 22:10:53.806 STDOUT terraform:  + address = (known after apply) 2025-03-22 22:10:53.806737 | orchestrator | 22:10:53.806 STDOUT terraform:  + all_tags = (known after apply) 2025-03-22 22:10:53.806757 | orchestrator | 22:10:53.806 STDOUT terraform:  + dns_domain = (known after apply) 2025-03-22 22:10:53.806777 | orchestrator | 22:10:53.806 STDOUT terraform:  + dns_name = (known after apply) 2025-03-22 22:10:53.806798 | orchestrator | 22:10:53.806 STDOUT terraform:  + fixed_ip = (known after apply) 2025-03-22 22:10:53.806824 | orchestrator | 22:10:53.806 STDOUT terraform:  + id = (known after apply) 2025-03-22 22:10:53.806832 | orchestrator | 22:10:53.806 STDOUT terraform:  + pool = "public" 2025-03-22 22:10:53.806864 | orchestrator | 22:10:53.806 STDOUT terraform:  + 
port_id = (known after apply) 2025-03-22 22:10:53.806885 | orchestrator | 22:10:53.806 STDOUT terraform:  + region = (known after apply) 2025-03-22 22:10:53.806910 | orchestrator | 22:10:53.806 STDOUT terraform:  + subnet_id = (known after apply) 2025-03-22 22:10:53.806931 | orchestrator | 22:10:53.806 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-22 22:10:53.806981 | orchestrator | 22:10:53.806 STDOUT terraform:  } 2025-03-22 22:10:53.806989 | orchestrator | 22:10:53.806 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-03-22 22:10:53.807026 | orchestrator | 22:10:53.806 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-03-22 22:10:53.807061 | orchestrator | 22:10:53.807 STDOUT terraform:  + admin_state_up = (known after apply) 2025-03-22 22:10:53.807097 | orchestrator | 22:10:53.807 STDOUT terraform:  + all_tags = (known after apply) 2025-03-22 22:10:53.807108 | orchestrator | 22:10:53.807 STDOUT terraform:  + availability_zone_hints = [ 2025-03-22 22:10:53.807129 | orchestrator | 22:10:53.807 STDOUT terraform:  + "nova", 2025-03-22 22:10:53.807171 | orchestrator | 22:10:53.807 STDOUT terraform:  ] 2025-03-22 22:10:53.807180 | orchestrator | 22:10:53.807 STDOUT terraform:  + dns_domain = (known after apply) 2025-03-22 22:10:53.807208 | orchestrator | 22:10:53.807 STDOUT terraform:  + external = (known after apply) 2025-03-22 22:10:53.807383 | orchestrator | 22:10:53.807 STDOUT terraform:  + id = (known after apply) 2025-03-22 22:10:53.807459 | orchestrator | 22:10:53.807 STDOUT terraform:  + mtu = (known after apply) 2025-03-22 22:10:53.807478 | orchestrator | 22:10:53.807 STDOUT terraform:  + name = "net-testbed-management" 2025-03-22 22:10:53.807492 | orchestrator | 22:10:53.807 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-03-22 22:10:53.807511 | orchestrator | 22:10:53.807 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-03-22 
22:10:53.807527 | orchestrator | 22:10:53.807 STDOUT terraform:  2025-03-22 22:10:53.807541 | orchestrator | 22:10:53.807 STDOUT terraform:  + region = (known after apply) 2025-03-22 22:10:53.807555 | orchestrator | 22:10:53.807 STDOUT terraform:  + shared = (known after apply) 2025-03-22 22:10:53.807572 | orchestrator | 22:10:53.807 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-22 22:10:53.807586 | orchestrator | 22:10:53.807 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-03-22 22:10:53.807600 | orchestrator | 22:10:53.807 STDOUT terraform:  + segments (known after apply) 2025-03-22 22:10:53.807618 | orchestrator | 22:10:53.807 STDOUT terraform:  } 2025-03-22 22:10:53.807691 | orchestrator | 22:10:53.807 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-03-22 22:10:53.807712 | orchestrator | 22:10:53.807 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-03-22 22:10:53.807727 | orchestrator | 22:10:53.807 STDOUT terraform:  + admin_state_up = (known after apply) 2025-03-22 22:10:53.807744 | orchestrator | 22:10:53.807 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-03-22 22:10:53.807761 | orchestrator | 22:10:53.807 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-03-22 22:10:53.807818 | orchestrator | 22:10:53.807 STDOUT terraform:  + all_tags = (known after apply) 2025-03-22 22:10:53.807837 | orchestrator | 22:10:53.807 STDOUT terraform:  + device_id = (known after apply) 2025-03-22 22:10:53.807884 | orchestrator | 22:10:53.807 STDOUT terraform:  + device_owner = (known after apply) 2025-03-22 22:10:53.807902 | orchestrator | 22:10:53.807 STDOUT terraform:  + dns_assignment = (known after apply) 2025-03-22 22:10:53.807946 | orchestrator | 22:10:53.807 STDOUT terraform:  + dns_name = (known after apply) 2025-03-22 22:10:53.807964 | orchestrator | 22:10:53.807 STDOUT terraform:  + id = (known after apply) 
2025-03-22 22:10:53.808011 | orchestrator | 22:10:53.807 STDOUT terraform:  + mac_address = (known after apply) 2025-03-22 22:10:53.808061 | orchestrator | 22:10:53.807 STDOUT terraform:  + network_id = (known after apply) 2025-03-22 22:10:53.808120 | orchestrator | 22:10:53.808 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-03-22 22:10:53.808139 | orchestrator | 22:10:53.808 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-03-22 22:10:53.808154 | orchestrator | 22:10:53.808 STDOUT terraform:  + region = (known after apply) 2025-03-22 22:10:53.808171 | orchestrator | 22:10:53.808 STDOUT terraform:  + security_group_ids = (known after apply) 2025-03-22 22:10:53.808188 | orchestrator | 22:10:53.808 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-22 22:10:53.808205 | orchestrator | 22:10:53.808 STDOUT terraform:  + allowed_address_pairs { 2025-03-22 22:10:53.808248 | orchestrator | 22:10:53.808 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-03-22 22:10:53.808263 | orchestrator | 22:10:53.808 STDOUT terraform:  } 2025-03-22 22:10:53.808280 | orchestrator | 22:10:53.808 STDOUT terraform:  + allowed_address_pairs { 2025-03-22 22:10:53.808295 | orchestrator | 22:10:53.808 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-03-22 22:10:53.808312 | orchestrator | 22:10:53.808 STDOUT terraform:  } 2025-03-22 22:10:53.808326 | orchestrator | 22:10:53.808 STDOUT terraform:  + binding (known after apply) 2025-03-22 22:10:53.808344 | orchestrator | 22:10:53.808 STDOUT terraform:  + fixed_ip { 2025-03-22 22:10:53.808387 | orchestrator | 22:10:53.808 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-03-22 22:10:53.808406 | orchestrator | 22:10:53.808 STDOUT terraform:  + subnet_id = (known after apply) 2025-03-22 22:10:53.808448 | orchestrator | 22:10:53.808 STDOUT terraform:  } 2025-03-22 22:10:53.808463 | orchestrator | 22:10:53.808 STDOUT terraform:  } 2025-03-22 22:10:53.808481 | orchestrator | 22:10:53.808 
STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will be created 2025-03-22 22:10:53.808532 | orchestrator | 22:10:53.808 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-03-22 22:10:53.808551 | orchestrator | 22:10:53.808 STDOUT terraform:  + admin_state_up = (known after apply) 2025-03-22 22:10:53.808566 | orchestrator | 22:10:53.808 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-03-22 22:10:53.808583 | orchestrator | 22:10:53.808 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-03-22 22:10:53.808600 | orchestrator | 22:10:53.808 STDOUT terraform:  + all_tags = (known after apply) 2025-03-22 22:10:53.808656 | orchestrator | 22:10:53.808 STDOUT terraform:  + device_id = (known after apply) 2025-03-22 22:10:53.808675 | orchestrator | 22:10:53.808 STDOUT terraform:  + device_owner = (known after apply) 2025-03-22 22:10:53.808719 | orchestrator | 22:10:53.808 STDOUT terraform:  + dns_assignment = (known after apply) 2025-03-22 22:10:53.808737 | orchestrator | 22:10:53.808 STDOUT terraform:  + dns_name = (known after apply) 2025-03-22 22:10:53.808782 | orchestrator | 22:10:53.808 STDOUT terraform:  + id = (known after apply) 2025-03-22 22:10:53.808800 | orchestrator | 22:10:53.808 STDOUT terraform:  + mac_address = (known after apply) 2025-03-22 22:10:53.808838 | orchestrator | 22:10:53.808 STDOUT terraform:  + network_id = (known after apply) 2025-03-22 22:10:53.808886 | orchestrator | 22:10:53.808 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-03-22 22:10:53.808904 | orchestrator | 22:10:53.808 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-03-22 22:10:53.808948 | orchestrator | 22:10:53.808 STDOUT terraform:  + region = (known after apply) 2025-03-22 22:10:53.808966 | orchestrator | 22:10:53.808 STDOUT terraform:  + security_group_ids = (known after apply) 2025-03-22 22:10:53.809011 | orchestrator | 22:10:53.808 STDOUT 
terraform:  + tenant_id = (known after apply) 2025-03-22 22:10:53.809064 | orchestrator | 22:10:53.808 STDOUT terraform:  + allowed_address_pairs { 2025-03-22 22:10:53.809082 | orchestrator | 22:10:53.809 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-03-22 22:10:53.809097 | orchestrator | 22:10:53.809 STDOUT terraform:  } 2025-03-22 22:10:53.809112 | orchestrator | 22:10:53.809 STDOUT terraform:  + allowed_address_pairs { 2025-03-22 22:10:53.809129 | orchestrator | 22:10:53.809 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-03-22 22:10:53.809143 | orchestrator | 22:10:53.809 STDOUT terraform:  } 2025-03-22 22:10:53.809157 | orchestrator | 22:10:53.809 STDOUT terraform:  + allowed_address_pairs { 2025-03-22 22:10:53.809174 | orchestrator | 22:10:53.809 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-03-22 22:10:53.809189 | orchestrator | 22:10:53.809 STDOUT terraform:  } 2025-03-22 22:10:53.809203 | orchestrator | 22:10:53.809 STDOUT terraform:  + allowed_address_pairs { 2025-03-22 22:10:53.809242 | orchestrator | 22:10:53.809 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-03-22 22:10:53.809295 | orchestrator | 22:10:53.809 STDOUT terraform:  } 2025-03-22 22:10:53.809316 | orchestrator | 22:10:53.809 STDOUT terraform:  + binding (known after apply) 2025-03-22 22:10:53.809331 | orchestrator | 22:10:53.809 STDOUT terraform:  + fixed_ip { 2025-03-22 22:10:53.809349 | orchestrator | 22:10:53.809 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-03-22 22:10:53.809413 | orchestrator | 22:10:53.809 STDOUT terraform:  + subnet_id = (known after apply) 2025-03-22 22:10:53.809429 | orchestrator | 22:10:53.809 STDOUT terraform:  } 2025-03-22 22:10:53.809443 | orchestrator | 22:10:53.809 STDOUT terraform:  } 2025-03-22 22:10:53.809459 | orchestrator | 22:10:53.809 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-03-22 22:10:53.809477 | orchestrator | 22:10:53.809 STDOUT terraform:  + 
resource "openstack_networking_port_v2" "node_port_management" { 2025-03-22 22:10:53.809491 | orchestrator | 22:10:53.809 STDOUT terraform:  + admin_state_up = (known after apply) 2025-03-22 22:10:53.809506 | orchestrator | 22:10:53.809 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-03-22 22:10:53.809523 | orchestrator | 22:10:53.809 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-03-22 22:10:53.809579 | orchestrator | 22:10:53.809 STDOUT terraform:  + all_tags = (known after apply) 2025-03-22 22:10:53.809599 | orchestrator | 22:10:53.809 STDOUT terraform:  + device_id = (known after apply) 2025-03-22 22:10:53.809621 | orchestrator | 22:10:53.809 STDOUT terraform:  + device_owner = (known after apply) 2025-03-22 22:10:53.809639 | orchestrator | 22:10:53.809 STDOUT terraform:  + dns_assignment = (known after apply) 2025-03-22 22:10:53.809656 | orchestrator | 22:10:53.809 STDOUT terraform:  + dns_name = (known after apply) 2025-03-22 22:10:53.809703 | orchestrator | 22:10:53.809 STDOUT terraform:  + id = (known after apply) 2025-03-22 22:10:53.809721 | orchestrator | 22:10:53.809 STDOUT terraform:  + mac_address = (known after apply) 2025-03-22 22:10:53.809766 | orchestrator | 22:10:53.809 STDOUT terraform:  + network_id = (known after apply) 2025-03-22 22:10:53.809784 | orchestrator | 22:10:53.809 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-03-22 22:10:53.809839 | orchestrator | 22:10:53.809 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-03-22 22:10:53.809857 | orchestrator | 22:10:53.809 STDOUT terraform:  + region = (known after apply) 2025-03-22 22:10:53.809901 | orchestrator | 22:10:53.809 STDOUT terraform:  + security_group_ids = (known after apply) 2025-03-22 22:10:53.809919 | orchestrator | 22:10:53.809 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-22 22:10:53.809936 | orchestrator | 22:10:53.809 STDOUT terraform:  + allowed_address_pairs { 2025-03-22 22:10:53.809954 
| orchestrator | 22:10:53.809 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-03-22 22:10:53.809971 | orchestrator | 22:10:53.809 STDOUT terraform:  } 2025-03-22 22:10:53.809988 | orchestrator | 22:10:53.809 STDOUT terraform:  + allowed_address_pairs { 2025-03-22 22:10:53.810005 | orchestrator | 22:10:53.809 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-03-22 22:10:53.810048 | orchestrator | 22:10:53.810 STDOUT terraform:  } 2025-03-22 22:10:53.810079 | orchestrator | 22:10:53.810 STDOUT terraform:  + allowed_address_pairs { 2025-03-22 22:10:53.810097 | orchestrator | 22:10:53.810 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-03-22 22:10:53.810112 | orchestrator | 22:10:53.810 STDOUT terraform:  } 2025-03-22 22:10:53.810129 | orchestrator | 22:10:53.810 STDOUT terraform:  + allowed_address_pairs { 2025-03-22 22:10:53.810143 | orchestrator | 22:10:53.810 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-03-22 22:10:53.810160 | orchestrator | 22:10:53.810 STDOUT terraform:  } 2025-03-22 22:10:53.810235 | orchestrator | 22:10:53.810 STDOUT terraform:  + binding (known after apply) 2025-03-22 22:10:53.810251 | orchestrator | 22:10:53.810 STDOUT terraform:  + fixed_ip { 2025-03-22 22:10:53.810265 | orchestrator | 22:10:53.810 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-03-22 22:10:53.810282 | orchestrator | 22:10:53.810 STDOUT terraform:  + subnet_id = (known after apply) 2025-03-22 22:10:53.810328 | orchestrator | 22:10:53.810 STDOUT terraform:  } 2025-03-22 22:10:53.810343 | orchestrator | 22:10:53.810 STDOUT terraform:  } 2025-03-22 22:10:53.810360 | orchestrator | 22:10:53.810 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-03-22 22:10:53.810387 | orchestrator | 22:10:53.810 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-03-22 22:10:53.810413 | orchestrator | 22:10:53.810 STDOUT terraform:  + admin_state_up = (known after 
apply) 2025-03-22 22:10:53.810467 | orchestrator | 22:10:53.810 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-03-22 22:10:53.810486 | orchestrator | 22:10:53.810 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-03-22 22:10:53.810536 | orchestrator | 22:10:53.810 STDOUT terraform:  + all_tags = (known after apply) 2025-03-22 22:10:53.810555 | orchestrator | 22:10:53.810 STDOUT terraform:  + device_id = (known after apply) 2025-03-22 22:10:53.810607 | orchestrator | 22:10:53.810 STDOUT terraform:  + device_owner = (known after apply) 2025-03-22 22:10:53.810626 | orchestrator | 22:10:53.810 STDOUT terraform:  + dns_assignment = (known after apply) 2025-03-22 22:10:53.810670 | orchestrator | 22:10:53.810 STDOUT terraform:  + dns_name = (known after apply) 2025-03-22 22:10:53.810689 | orchestrator | 22:10:53.810 STDOUT terraform:  + id = (known after apply) 2025-03-22 22:10:53.810730 | orchestrator | 22:10:53.810 STDOUT terraform:  + mac_address = (known after apply) 2025-03-22 22:10:53.810749 | orchestrator | 22:10:53.810 STDOUT terraform:  + network_id = (known after apply) 2025-03-22 22:10:53.810802 | orchestrator | 22:10:53.810 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-03-22 22:10:53.810821 | orchestrator | 22:10:53.810 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-03-22 22:10:53.810869 | orchestrator | 22:10:53.810 STDOUT terraform:  + region = (known after apply) 2025-03-22 22:10:53.810888 | orchestrator | 22:10:53.810 STDOUT terraform:  + security_group_ids = (known after apply) 2025-03-22 22:10:53.810902 | orchestrator | 22:10:53.810 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-22 22:10:53.810919 | orchestrator | 22:10:53.810 STDOUT terraform:  + allowed_address_pairs { 2025-03-22 22:10:53.810933 | orchestrator | 22:10:53.810 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-03-22 22:10:53.810950 | orchestrator | 22:10:53.810 STDOUT terraform:  } 2025-03-22 
22:10:53.810994 | orchestrator | 22:10:53.810 STDOUT terraform:  + allowed_address_pairs { 2025-03-22 22:10:53.811012 | orchestrator | 22:10:53.810 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-03-22 22:10:53.811026 | orchestrator | 22:10:53.810 STDOUT terraform:  } 2025-03-22 22:10:53.811040 | orchestrator | 22:10:53.810 STDOUT terraform:  + allowed_address_pairs { 2025-03-22 22:10:53.811057 | orchestrator | 22:10:53.810 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-03-22 22:10:53.811071 | orchestrator | 22:10:53.811 STDOUT terraform:  } 2025-03-22 22:10:53.811085 | orchestrator | 22:10:53.811 STDOUT terraform:  + allowed_address_pairs { 2025-03-22 22:10:53.811102 | orchestrator | 22:10:53.811 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-03-22 22:10:53.811117 | orchestrator | 22:10:53.811 STDOUT terraform:  } 2025-03-22 22:10:53.811131 | orchestrator | 22:10:53.811 STDOUT terraform:  + binding (known after apply) 2025-03-22 22:10:53.811148 | orchestrator | 22:10:53.811 STDOUT terraform:  + fixed_ip { 2025-03-22 22:10:53.811169 | orchestrator | 22:10:53.811 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-03-22 22:10:53.811192 | orchestrator | 22:10:53.811 STDOUT terraform:  + subnet_id = (known after apply) 2025-03-22 22:10:53.811251 | orchestrator | 22:10:53.811 STDOUT terraform:  } 2025-03-22 22:10:53.811290 | orchestrator | 22:10:53.811 STDOUT terraform:  } 2025-03-22 22:10:53.811310 | orchestrator | 22:10:53.811 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-03-22 22:10:53.811325 | orchestrator | 22:10:53.811 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-03-22 22:10:53.811339 | orchestrator | 22:10:53.811 STDOUT terraform:  + admin_state_up = (known after apply) 2025-03-22 22:10:53.811355 | orchestrator | 22:10:53.811 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-03-22 22:10:53.811412 | orchestrator | 
22:10:53.811 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-03-22 22:10:53.811431 | orchestrator | 22:10:53.811 STDOUT terraform:  + all_tags = (known after apply) 2025-03-22 22:10:53.811473 | orchestrator | 22:10:53.811 STDOUT terraform:  + device_id = (known after apply) 2025-03-22 22:10:53.811491 | orchestrator | 22:10:53.811 STDOUT terraform:  + device_owner = (known after apply) 2025-03-22 22:10:53.811544 | orchestrator | 22:10:53.811 STDOUT terraform:  + dns_assignment = (known after apply) 2025-03-22 22:10:53.811562 | orchestrator | 22:10:53.811 STDOUT terraform:  + dns_name = (known after apply) 2025-03-22 22:10:53.811612 | orchestrator | 22:10:53.811 STDOUT terraform:  + id = (known after apply) 2025-03-22 22:10:53.811631 | orchestrator | 22:10:53.811 STDOUT terraform:  + mac_address = (known after apply) 2025-03-22 22:10:53.811672 | orchestrator | 22:10:53.811 STDOUT terraform:  + network_id = (known after apply) 2025-03-22 22:10:53.811690 | orchestrator | 22:10:53.811 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-03-22 22:10:53.811744 | orchestrator | 22:10:53.811 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-03-22 22:10:53.811762 | orchestrator | 22:10:53.811 STDOUT terraform:  + region = (known after apply) 2025-03-22 22:10:53.811804 | orchestrator | 22:10:53.811 STDOUT terraform:  + security_group_ids = (known after apply) 2025-03-22 22:10:53.811822 | orchestrator | 22:10:53.811 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-22 22:10:53.811864 | orchestrator | 22:10:53.811 STDOUT terraform:  + allowed_address_pairs { 2025-03-22 22:10:53.811882 | orchestrator | 22:10:53.811 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-03-22 22:10:53.811897 | orchestrator | 22:10:53.811 STDOUT terraform:  } 2025-03-22 22:10:53.811911 | orchestrator | 22:10:53.811 STDOUT terraform:  + allowed_address_pairs { 2025-03-22 22:10:53.811928 | orchestrator | 22:10:53.811 STDOUT terraform: 
 + ip_address = "192.168.16.254/20" 2025-03-22 22:10:53.811979 | orchestrator | 22:10:53.811 STDOUT terraform:  } 2025-03-22 22:10:53.812002 | orchestrator | 22:10:53.811 STDOUT terraform:  + allowed_address_pairs { 2025-03-22 22:10:53.812020 | orchestrator | 22:10:53.811 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-03-22 22:10:53.812047 | orchestrator | 22:10:53.811 STDOUT terraform:  } 2025-03-22 22:10:53.812062 | orchestrator | 22:10:53.811 STDOUT terraform:  + allowed_address_pairs { 2025-03-22 22:10:53.812076 | orchestrator | 22:10:53.811 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-03-22 22:10:53.812090 | orchestrator | 22:10:53.811 STDOUT terraform:  } 2025-03-22 22:10:53.812107 | orchestrator | 22:10:53.812 STDOUT terraform:  + binding (known after apply) 2025-03-22 22:10:53.812170 | orchestrator | 22:10:53.812 STDOUT terraform:  + fixed_ip { 2025-03-22 22:10:53.812185 | orchestrator | 22:10:53.812 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-03-22 22:10:53.812199 | orchestrator | 22:10:53.812 STDOUT terraform:  + subnet_id = (known after apply) 2025-03-22 22:10:53.812239 | orchestrator | 22:10:53.812 STDOUT terraform:  } 2025-03-22 22:10:53.812260 | orchestrator | 22:10:53.812 STDOUT terraform:  } 2025-03-22 22:10:53.812278 | orchestrator | 22:10:53.812 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-03-22 22:10:53.812323 | orchestrator | 22:10:53.812 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-03-22 22:10:53.812338 | orchestrator | 22:10:53.812 STDOUT terraform:  + admin_state_up = (known after apply) 2025-03-22 22:10:53.812352 | orchestrator | 22:10:53.812 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-03-22 22:10:53.812370 | orchestrator | 22:10:53.812 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-03-22 22:10:53.812423 | orchestrator | 22:10:53.812 STDOUT terraform:  + all_tags = (known 
after apply) 2025-03-22 22:10:53.812438 | orchestrator | 22:10:53.812 STDOUT terraform:  + device_id = (known after apply) 2025-03-22 22:10:53.812455 | orchestrator | 22:10:53.812 STDOUT terraform:  + device_owner = (known after apply) 2025-03-22 22:10:53.812469 | orchestrator | 22:10:53.812 STDOUT terraform:  + dns_assignment = (known after apply) 2025-03-22 22:10:53.812522 | orchestrator | 22:10:53.812 STDOUT terraform:  + dns_name = (known after apply) 2025-03-22 22:10:53.812542 | orchestrator | 22:10:53.812 STDOUT terraform:  + id = (known after apply) 2025-03-22 22:10:53.812580 | orchestrator | 22:10:53.812 STDOUT terraform:  + mac_address = (known after apply) 2025-03-22 22:10:53.812598 | orchestrator | 22:10:53.812 STDOUT terraform:  + network_id = (known after apply) 2025-03-22 22:10:53.812691 | orchestrator | 22:10:53.812 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-03-22 22:10:53.812714 | orchestrator | 22:10:53.812 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-03-22 22:10:53.812738 | orchestrator | 22:10:53.812 STDOUT terraform:  + region = (known after apply) 2025-03-22 22:10:53.812745 | orchestrator | 22:10:53.812 STDOUT terraform:  + security_group_ids = (known after apply) 2025-03-22 22:10:53.812753 | orchestrator | 22:10:53.812 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-22 22:10:53.812782 | orchestrator | 22:10:53.812 STDOUT terraform:  + allowed_address_pairs { 2025-03-22 22:10:53.812790 | orchestrator | 22:10:53.812 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-03-22 22:10:53.812814 | orchestrator | 22:10:53.812 STDOUT terraform:  } 2025-03-22 22:10:53.812822 | orchestrator | 22:10:53.812 STDOUT terraform:  + allowed_address_pairs { 2025-03-22 22:10:53.812840 | orchestrator | 22:10:53.812 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-03-22 22:10:53.812847 | orchestrator | 22:10:53.812 STDOUT terraform:  } 2025-03-22 22:10:53.812868 | orchestrator | 22:10:53.812 
STDOUT terraform:
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)
      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description       = "ssh"
      + direction         = "ingress"
      + ethertype         = "IPv4"
      + id                = (known after apply)
      + port_range_max    = 22
      + port_range_min    = 22
      + protocol          = "tcp"
      + region            = (known after apply)
      + remote_group_id   = (known after apply)
      + remote_ip_prefix  = "0.0.0.0/0"
      + security_group_id = (known after apply)
      + tenant_id         = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description       = "wireguard"
      + direction         = "ingress"
      + ethertype         = "IPv4"
      + id                = (known after apply)
      + port_range_max    = 51820
      + port_range_min    = 51820
      + protocol          = "udp"
      + region            = (known after apply)
      + remote_group_id   = (known after apply)
      + remote_ip_prefix  = "0.0.0.0/0"
      + security_group_id = (known after apply)
      + tenant_id         = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction         = "ingress"
      + ethertype         = "IPv4"
      + id                = (known after apply)
      + protocol          = "tcp"
      + region            = (known after apply)
      + remote_group_id   = (known after apply)
      + remote_ip_prefix  = "192.168.16.0/20"
      + security_group_id = (known after apply)
      + tenant_id         = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction         = "ingress"
      + ethertype         = "IPv4"
      + id                = (known after apply)
      + protocol          = "udp"
      + region            = (known after apply)
      + remote_group_id   = (known after apply)
      + remote_ip_prefix  = "192.168.16.0/20"
      + security_group_id = (known after apply)
      + tenant_id         = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction         = "ingress"
      + ethertype         = "IPv4"
      + id                = (known after apply)
      + protocol          = "icmp"
      + region            = (known after apply)
      + remote_group_id   = (known after apply)
      + remote_ip_prefix  = "0.0.0.0/0"
      + security_group_id = (known after apply)
      + tenant_id         = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction         = "ingress"
      + ethertype         = "IPv4"
      + id                = (known after apply)
      + protocol          = "tcp"
      + region            = (known after apply)
      + remote_group_id   = (known after apply)
      + remote_ip_prefix  = "0.0.0.0/0"
      + security_group_id = (known after apply)
      + tenant_id         = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction         = "ingress"
      + ethertype         = "IPv4"
      + id                = (known after apply)
      + protocol          = "udp"
      + region            = (known after apply)
      + remote_group_id   = (known after apply)
      + remote_ip_prefix  = "0.0.0.0/0"
      + security_group_id = (known after apply)
      + tenant_id         = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction         = "ingress"
      + ethertype         = "IPv4"
      + id                = (known after apply)
      + protocol          = "icmp"
      + region            = (known after apply)
      + remote_group_id   = (known after apply)
      + remote_ip_prefix  = "0.0.0.0/0"
      + security_group_id = (known after apply)
      + tenant_id         = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description       = "vrrp"
      + direction         = "ingress"
      + ethertype         = "IPv4"
      + id                = (known after apply)
      + protocol          = "112"
      + region            = (known after apply)
      + remote_group_id   = (known after apply)
      + remote_ip_prefix  = "0.0.0.0/0"
      + security_group_id = (known after apply)
      + tenant_id         = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
      + network_id        = (known after apply)
      + no_gateway        = false
      + region            = (known after apply)
      + service_types     = (known after apply)
      + tenant_id         = (known after apply)
      + allocation_pool {
          + end   = "192.168.31.250"
          + start = "192.168.31.200"
        }
    }

  # terraform_data.image will be created
  + resource "terraform_data" "image" {
      + id     = (known after apply)
      + input  = "Ubuntu 24.04"
      + output = (known after apply)
    }

  # terraform_data.image_node will be created
  + resource "terraform_data" "image_node" {
      + id     = (known after apply)
      + input  = "Ubuntu 24.04"
      + output = (known after apply)
    }

Plan: 82 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + manager_address = (sensitive value)
  + private_key     = (sensitive value)

22:10:53.962 STDOUT terraform: terraform_data.image_node: Creating...
22:10:53.962 STDOUT terraform: terraform_data.image: Creating...
22:10:53.962 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=bfe77fb0-1c86-8183-89bd-ba0ff39e9ffb]
22:10:53.962 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=c8859f68-d2cf-940b-9f7d-8413e0029f5d]
22:10:53.970 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
22:10:53.977 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-03-22 22:10:53.977295 | orchestrator | 22:10:53.977 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-03-22 22:10:53.977605 | orchestrator | 22:10:53.977 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creating... 2025-03-22 22:10:53.978317 | orchestrator | 22:10:53.978 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-03-22 22:10:53.979701 | orchestrator | 22:10:53.979 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creating... 2025-03-22 22:10:53.979795 | orchestrator | 22:10:53.979 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-03-22 22:10:53.981499 | orchestrator | 22:10:53.981 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-03-22 22:10:53.985495 | orchestrator | 22:10:53.985 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-03-22 22:10:53.985577 | orchestrator | 22:10:53.985 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-03-22 22:10:54.399482 | orchestrator | 22:10:54.399 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-03-22 22:10:54.406154 | orchestrator | 22:10:54.405 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-03-22 22:10:54.625052 | orchestrator | 22:10:54.624 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-03-22 22:10:54.629728 | orchestrator | 22:10:54.629 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creating... 
2025-03-22 22:10:59.806860 | orchestrator | 22:10:59.806 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=966c0d23-091c-413f-9374-b0b6f8ed19d4] 2025-03-22 22:10:59.812271 | orchestrator | 22:10:59.812 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creating... 2025-03-22 22:11:03.979770 | orchestrator | 22:11:03.979 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-03-22 22:11:03.981948 | orchestrator | 22:11:03.981 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Still creating... [10s elapsed] 2025-03-22 22:11:03.982167 | orchestrator | 22:11:03.981 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Still creating... [10s elapsed] 2025-03-22 22:11:03.982246 | orchestrator | 22:11:03.981 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-03-22 22:11:03.982956 | orchestrator | 22:11:03.982 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-03-22 22:11:03.986309 | orchestrator | 22:11:03.986 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-03-22 22:11:03.986404 | orchestrator | 22:11:03.986 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-03-22 22:11:04.407105 | orchestrator | 22:11:04.406 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... 
[10s elapsed] 2025-03-22 22:11:04.555872 | orchestrator | 22:11:04.555 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 11s [id=873c6414-afc7-40f1-8cf8-9106a041fae2] 2025-03-22 22:11:04.562159 | orchestrator | 22:11:04.559 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creation complete after 11s [id=b423c274-b2a0-4f0a-b616-ca1c2b60d0cd] 2025-03-22 22:11:04.564885 | orchestrator | 22:11:04.564 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creating... 2025-03-22 22:11:04.569311 | orchestrator | 22:11:04.569 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creating... 2025-03-22 22:11:04.574144 | orchestrator | 22:11:04.573 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creation complete after 11s [id=2e00c2ca-00f1-4fee-ad68-d47920d8c405] 2025-03-22 22:11:04.576511 | orchestrator | 22:11:04.576 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 11s [id=2cb08140-af63-48ba-9aa2-c694b8b8e9ae] 2025-03-22 22:11:04.583641 | orchestrator | 22:11:04.582 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creating... 2025-03-22 22:11:04.592341 | orchestrator | 22:11:04.582 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creating... 2025-03-22 22:11:04.592395 | orchestrator | 22:11:04.592 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=036a8c60-8400-4952-a958-bb8a1eba60c8] 2025-03-22 22:11:04.596387 | orchestrator | 22:11:04.595 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=4dc307ff-74e9-4f4c-930f-bbccca46b507] 2025-03-22 22:11:04.599645 | orchestrator | 22:11:04.599 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-03-22 22:11:04.606897 | orchestrator | 22:11:04.606 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 
2025-03-22 22:11:04.624079 | orchestrator | 22:11:04.623 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=7d111a90-4829-481c-9373-d6d983f6493f] 2025-03-22 22:11:04.628894 | orchestrator | 22:11:04.628 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-03-22 22:11:04.630478 | orchestrator | 22:11:04.630 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Still creating... [10s elapsed] 2025-03-22 22:11:04.646960 | orchestrator | 22:11:04.646 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 11s [id=0dd26abc-84c4-4e20-a660-b25ef0be7791] 2025-03-22 22:11:04.652568 | orchestrator | 22:11:04.652 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creating... 2025-03-22 22:11:04.802401 | orchestrator | 22:11:04.802 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creation complete after 10s [id=753a6438-823e-47df-a447-41be61353e18] 2025-03-22 22:11:04.809488 | orchestrator | 22:11:04.809 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-03-22 22:11:04.862857 | orchestrator | 22:11:04.862 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-03-22 22:11:04.870480 | orchestrator | 22:11:04.870 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-03-22 22:11:09.813084 | orchestrator | 22:11:09.812 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Still creating... [10s elapsed] 2025-03-22 22:11:09.983044 | orchestrator | 22:11:09.982 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creation complete after 10s [id=708adc92-3837-440f-909c-446edf0d18e7] 2025-03-22 22:11:09.993075 | orchestrator | 22:11:09.992 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 
2025-03-22 22:11:14.566440 | orchestrator | 22:11:14.566 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Still creating... [10s elapsed] 2025-03-22 22:11:14.570388 | orchestrator | 22:11:14.570 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Still creating... [10s elapsed] 2025-03-22 22:11:14.581625 | orchestrator | 22:11:14.581 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Still creating... [10s elapsed] 2025-03-22 22:11:14.583800 | orchestrator | 22:11:14.583 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Still creating... [10s elapsed] 2025-03-22 22:11:14.599935 | orchestrator | 22:11:14.599 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-03-22 22:11:14.607201 | orchestrator | 22:11:14.607 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-03-22 22:11:14.629569 | orchestrator | 22:11:14.629 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-03-22 22:11:14.652960 | orchestrator | 22:11:14.652 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Still creating... [10s elapsed] 2025-03-22 22:11:14.765125 | orchestrator | 22:11:14.764 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creation complete after 10s [id=57690c98-8cea-4402-9842-e7701133b4c4] 2025-03-22 22:11:14.779590 | orchestrator | 22:11:14.779 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-03-22 22:11:14.783780 | orchestrator | 22:11:14.783 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creation complete after 10s [id=eb7c221e-d8a1-4ef7-8d99-2a1292a3f844] 2025-03-22 22:11:14.796431 | orchestrator | 22:11:14.796 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 
2025-03-22 22:11:14.804857 | orchestrator | 22:11:14.804 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creation complete after 10s [id=18369482-6d33-4fed-9778-d084c11eaa5e]
2025-03-22 22:11:14.811561 | orchestrator | 22:11:14.811 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-03-22 22:11:14.812481 | orchestrator | 22:11:14.812 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creation complete after 10s [id=c7efb8bb-3838-4312-aa24-d1dd239694a4]
2025-03-22 22:11:14.827256 | orchestrator | 22:11:14.827 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-03-22 22:11:14.832715 | orchestrator | 22:11:14.832 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=e3c02c6539a70f9917999abd5b4bba86801423ee]
2025-03-22 22:11:14.834705 | orchestrator | 22:11:14.834 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=e5c5a971-0b0d-41c6-9ff1-2abaf7ae3fd0]
2025-03-22 22:11:14.839150 | orchestrator | 22:11:14.839 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-03-22 22:11:14.846431 | orchestrator | 22:11:14.846 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-03-22 22:11:14.849576 | orchestrator | 22:11:14.849 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=c74566356bdd040f4e4900d28234c0e7d7e4e766]
2025-03-22 22:11:14.855044 | orchestrator | 22:11:14.854 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 10s [id=af10e111-d90b-4be1-a196-da98d242bbc6]
2025-03-22 22:11:14.855371 | orchestrator | 22:11:14.855 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-03-22 22:11:14.866664 | orchestrator | 22:11:14.866 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 10s [id=7e3894b5-9c16-4e11-bc6e-9b4e89d3d75f]
2025-03-22 22:11:14.871948 | orchestrator | 22:11:14.866 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-03-22 22:11:14.871998 | orchestrator | 22:11:14.871 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed]
2025-03-22 22:11:14.883276 | orchestrator | 22:11:14.883 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creation complete after 10s [id=34749356-9908-4430-b6a3-abe4e540ecc5]
2025-03-22 22:11:15.175803 | orchestrator | 22:11:15.175 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=d944c393-c469-4703-9a84-253eb786ae38]
2025-03-22 22:11:19.994004 | orchestrator | 22:11:19.993 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed]
2025-03-22 22:11:20.296596 | orchestrator | 22:11:20.296 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=56b5b16e-d0e2-43c3-a32a-de024f7ad133]
2025-03-22 22:11:20.562101 | orchestrator | 22:11:20.561 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=778a98f9-1661-42d7-a23d-0b25c1529a37]
2025-03-22 22:11:20.568976 | orchestrator | 22:11:20.568 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-03-22 22:11:24.781047 | orchestrator | 22:11:24.780 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed]
2025-03-22 22:11:24.797164 | orchestrator | 22:11:24.796 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed]
2025-03-22 22:11:24.812330 | orchestrator | 22:11:24.812 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed]
2025-03-22 22:11:24.839875 | orchestrator | 22:11:24.839 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed]
2025-03-22 22:11:24.867589 | orchestrator | 22:11:24.867 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed]
2025-03-22 22:11:25.125574 | orchestrator | 22:11:25.125 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=0c48013b-5bc1-40b3-9531-ab510496cfc7]
2025-03-22 22:11:25.146389 | orchestrator | 22:11:25.145 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=8a3962f0-2328-4d8c-9b57-21936678184a]
2025-03-22 22:11:25.155936 | orchestrator | 22:11:25.155 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=9e9b02d0-ba34-4a5b-a8b6-7a2befe88955]
2025-03-22 22:11:25.180867 | orchestrator | 22:11:25.180 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=8830d5a0-b84d-4cff-a107-ff4c6c105a90]
2025-03-22 22:11:25.199803 | orchestrator | 22:11:25.199 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=3b01584c-6e94-4a24-9d90-8d39d9da60f7]
2025-03-22 22:11:27.115615 | orchestrator | 22:11:27.115 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 6s [id=9729f4ed-7814-4c03-a280-7ed29ca27e74]
2025-03-22 22:11:27.121130 | orchestrator | 22:11:27.120 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-03-22 22:11:27.125936 | orchestrator | 22:11:27.125 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-03-22 22:11:27.127267 | orchestrator | 22:11:27.127 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-03-22 22:11:27.242736 | orchestrator | 22:11:27.242 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=a69c1840-068e-4057-8f39-b170fa18fcd9]
2025-03-22 22:11:27.265624 | orchestrator | 22:11:27.265 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-03-22 22:11:27.267000 | orchestrator | 22:11:27.266 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-03-22 22:11:27.267513 | orchestrator | 22:11:27.267 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-03-22 22:11:27.267977 | orchestrator | 22:11:27.267 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-03-22 22:11:27.268249 | orchestrator | 22:11:27.268 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-03-22 22:11:27.275323 | orchestrator | 22:11:27.275 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-03-22 22:11:27.275508 | orchestrator | 22:11:27.275 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-03-22 22:11:27.275538 | orchestrator | 22:11:27.275 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-03-22 22:11:27.280434 | orchestrator | 22:11:27.280 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=86eb6239-7472-442d-98d8-19573646ba47]
2025-03-22 22:11:27.287395 | orchestrator | 22:11:27.287 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-03-22 22:11:27.372520 | orchestrator | 22:11:27.372 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=a4cc4178-75d3-4cb5-93cf-91caea7e1532]
2025-03-22 22:11:27.392991 | orchestrator | 22:11:27.392 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-03-22 22:11:27.542608 | orchestrator | 22:11:27.542 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=ea66a6ee-4bf1-405a-a24e-6c6df36e10fd]
2025-03-22 22:11:27.547731 | orchestrator | 22:11:27.547 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-03-22 22:11:27.660062 | orchestrator | 22:11:27.659 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=a6fedfe3-4964-4199-b42c-fc2a8156fef0]
2025-03-22 22:11:27.674820 | orchestrator | 22:11:27.674 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-03-22 22:11:27.761395 | orchestrator | 22:11:27.761 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=f74e8606-a9fe-4dde-a679-66b8c51de49f]
2025-03-22 22:11:27.764413 | orchestrator | 22:11:27.764 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=51a6e7c6-1bdb-4d03-935e-d9a85aaeb359]
2025-03-22 22:11:27.767348 | orchestrator | 22:11:27.767 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-03-22 22:11:27.770576 | orchestrator | 22:11:27.770 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-03-22 22:11:27.878415 | orchestrator | 22:11:27.877 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=be7b71e7-2896-4a4d-99d4-667e375cc22c]
2025-03-22 22:11:27.885547 | orchestrator | 22:11:27.885 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-03-22 22:11:27.987997 | orchestrator | 22:11:27.987 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=47697f62-b864-4491-888d-ed04cb5d0bf5]
2025-03-22 22:11:27.994612 | orchestrator | 22:11:27.994 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-03-22 22:11:28.094877 | orchestrator | 22:11:28.094 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=afc9f161-11f6-4e1d-9516-69def086717d]
2025-03-22 22:11:28.205600 | orchestrator | 22:11:28.205 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=7482754d-0949-4e18-84d3-ab9e294901ba]
2025-03-22 22:11:32.942475 | orchestrator | 22:11:32.941 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=3e5f230b-f6c1-46b3-9916-29fa69fe7db2]
2025-03-22 22:11:32.957333 | orchestrator | 22:11:32.957 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=1972c6d6-3390-44d1-9707-d3682d47c7d3]
2025-03-22 22:11:33.108470 | orchestrator | 22:11:33.108 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=337b6519-de4f-4457-8416-29ed40a26071]
2025-03-22 22:11:33.125466 | orchestrator | 22:11:33.125 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=5dfa4794-f4fe-4750-a561-18f901a45232]
2025-03-22 22:11:33.261977 | orchestrator | 22:11:33.261 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=24ae19bf-fc0e-4844-be42-accec51cc877]
2025-03-22 22:11:33.612863 | orchestrator | 22:11:33.612 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 7s [id=330013b0-4bb0-49fd-b3e2-762f8c58d5ff]
2025-03-22 22:11:33.677961 | orchestrator | 22:11:33.677 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=90c9586f-1857-46c4-83d8-082e9c74e49b]
2025-03-22 22:11:34.054854 | orchestrator | 22:11:34.054 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=e159b1ef-139d-4ba5-9fab-fb7c868930f7]
2025-03-22 22:11:34.081884 | orchestrator | 22:11:34.081 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-03-22 22:11:34.091481 | orchestrator | 22:11:34.091 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-03-22 22:11:34.098084 | orchestrator | 22:11:34.094 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-03-22 22:11:34.099645 | orchestrator | 22:11:34.095 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-03-22 22:11:34.099688 | orchestrator | 22:11:34.099 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-03-22 22:11:34.108527 | orchestrator | 22:11:34.108 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-03-22 22:11:34.115518 | orchestrator | 22:11:34.115 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-03-22 22:11:40.523906 | orchestrator | 22:11:40.523 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=56178df4-6971-4f9e-a27f-6f066a0feaf6]
2025-03-22 22:11:40.547255 | orchestrator | 22:11:40.546 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-03-22 22:11:40.552679 | orchestrator | 22:11:40.547 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-03-22 22:11:40.552747 | orchestrator | 22:11:40.547 STDOUT terraform: local_file.inventory: Creating...
2025-03-22 22:11:40.552774 | orchestrator | 22:11:40.552 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=888b8b4e0ec66d6c6343e6ea3298a473e0e9af3b]
2025-03-22 22:11:41.036612 | orchestrator | 22:11:40.552 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=a4608283ca1b03c028b15cdf6b67015d8f50aca6]
2025-03-22 22:11:41.036743 | orchestrator | 22:11:41.036 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=56178df4-6971-4f9e-a27f-6f066a0feaf6]
2025-03-22 22:11:44.093002 | orchestrator | 22:11:44.092 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-03-22 22:11:44.097281 | orchestrator | 22:11:44.097 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-03-22 22:11:44.097525 | orchestrator | 22:11:44.097 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-03-22 22:11:44.102491 | orchestrator | 22:11:44.102 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-03-22 22:11:44.110699 | orchestrator | 22:11:44.110 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-03-22 22:11:44.115953 | orchestrator | 22:11:44.115 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-03-22 22:11:54.094498 | orchestrator | 22:11:54.094 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-03-22 22:11:54.097394 | orchestrator | 22:11:54.097 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-03-22 22:11:54.098498 | orchestrator | 22:11:54.098 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-03-22 22:11:54.103090 | orchestrator | 22:11:54.102 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-03-22 22:11:54.111422 | orchestrator | 22:11:54.111 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-03-22 22:11:54.116766 | orchestrator | 22:11:54.116 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-03-22 22:11:54.285247 | orchestrator | 22:11:54.284 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 20s [id=586e7cce-cf8d-4d2c-b8f9-3ed910395e5c]
2025-03-22 22:11:54.308254 | orchestrator | 22:11:54.307 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 20s [id=ec3ab099-f9e8-4d3d-ad36-5c98ea5cb15e]
2025-03-22 22:11:54.498932 | orchestrator | 22:11:54.498 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 20s [id=0e3d7321-682d-4ea1-9180-80bac4331eef]
2025-03-22 22:11:54.538570 | orchestrator | 22:11:54.538 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=4e839774-3b58-4d46-8ffb-d7e99901e384]
2025-03-22 22:11:54.875957 | orchestrator | 22:11:54.875 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=eeb28d01-b19b-4054-bd5b-58795a997391]
2025-03-22 22:12:04.097991 | orchestrator | 22:12:04.097 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-03-22 22:12:04.853795 | orchestrator | 22:12:04.853 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=da5855f4-30ac-4c0a-bbbf-1f44dbf26e01]
2025-03-22 22:12:04.875734 | orchestrator | 22:12:04.875 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-03-22 22:12:04.878478 | orchestrator | 22:12:04.878 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=842742512391637005]
2025-03-22 22:12:04.899942 | orchestrator | 22:12:04.899 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-03-22 22:12:04.900385 | orchestrator | 22:12:04.900 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creating...
2025-03-22 22:12:04.900411 | orchestrator | 22:12:04.900 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-03-22 22:12:04.900473 | orchestrator | 22:12:04.900 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creating...
2025-03-22 22:12:04.901928 | orchestrator | 22:12:04.901 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-03-22 22:12:04.903722 | orchestrator | 22:12:04.903 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creating...
2025-03-22 22:12:04.906503 | orchestrator | 22:12:04.906 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-03-22 22:12:04.907076 | orchestrator | 22:12:04.907 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creating...
2025-03-22 22:12:04.909846 | orchestrator | 22:12:04.909 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-03-22 22:12:04.914643 | orchestrator | 22:12:04.914 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-03-22 22:12:10.221717 | orchestrator | 22:12:10.221 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creation complete after 5s [id=eeb28d01-b19b-4054-bd5b-58795a997391/708adc92-3837-440f-909c-446edf0d18e7]
2025-03-22 22:12:10.229250 | orchestrator | 22:12:10.228 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=4e839774-3b58-4d46-8ffb-d7e99901e384/af10e111-d90b-4be1-a196-da98d242bbc6]
2025-03-22 22:12:10.232755 | orchestrator | 22:12:10.232 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-03-22 22:12:10.243631 | orchestrator | 22:12:10.243 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=0e3d7321-682d-4ea1-9180-80bac4331eef/7d111a90-4829-481c-9373-d6d983f6493f]
2025-03-22 22:12:10.245731 | orchestrator | 22:12:10.245 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creating...
2025-03-22 22:12:10.252525 | orchestrator | 22:12:10.252 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creation complete after 5s [id=eeb28d01-b19b-4054-bd5b-58795a997391/34749356-9908-4430-b6a3-abe4e540ecc5]
2025-03-22 22:12:10.254527 | orchestrator | 22:12:10.254 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creating...
2025-03-22 22:12:10.260787 | orchestrator | 22:12:10.260 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creating...
2025-03-22 22:12:10.270403 | orchestrator | 22:12:10.270 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=ec3ab099-f9e8-4d3d-ad36-5c98ea5cb15e/036a8c60-8400-4952-a958-bb8a1eba60c8]
2025-03-22 22:12:10.274609 | orchestrator | 22:12:10.274 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=0e3d7321-682d-4ea1-9180-80bac4331eef/2cb08140-af63-48ba-9aa2-c694b8b8e9ae]
2025-03-22 22:12:10.276910 | orchestrator | 22:12:10.276 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creating...
2025-03-22 22:12:10.289730 | orchestrator | 22:12:10.289 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-03-22 22:12:10.309970 | orchestrator | 22:12:10.309 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creation complete after 5s [id=ec3ab099-f9e8-4d3d-ad36-5c98ea5cb15e/57690c98-8cea-4402-9842-e7701133b4c4]
2025-03-22 22:12:10.323036 | orchestrator | 22:12:10.322 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creating...
2025-03-22 22:12:10.400126 | orchestrator | 22:12:10.399 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=586e7cce-cf8d-4d2c-b8f9-3ed910395e5c/0dd26abc-84c4-4e20-a660-b25ef0be7791]
2025-03-22 22:12:10.404368 | orchestrator | 22:12:10.403 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=da5855f4-30ac-4c0a-bbbf-1f44dbf26e01/e5c5a971-0b0d-41c6-9ff1-2abaf7ae3fd0]
2025-03-22 22:12:10.415051 | orchestrator | 22:12:10.414 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-03-22 22:12:10.428550 | orchestrator | 22:12:10.428 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-03-22 22:12:10.520651 | orchestrator | 22:12:10.520 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creation complete after 6s [id=da5855f4-30ac-4c0a-bbbf-1f44dbf26e01/c7efb8bb-3838-4312-aa24-d1dd239694a4]
2025-03-22 22:12:15.551489 | orchestrator | 22:12:15.550 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creation complete after 6s [id=4e839774-3b58-4d46-8ffb-d7e99901e384/18369482-6d33-4fed-9778-d084c11eaa5e]
2025-03-22 22:12:15.576906 | orchestrator | 22:12:15.576 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 6s [id=da5855f4-30ac-4c0a-bbbf-1f44dbf26e01/7e3894b5-9c16-4e11-bc6e-9b4e89d3d75f]
2025-03-22 22:12:15.593619 | orchestrator | 22:12:15.593 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creation complete after 6s [id=0e3d7321-682d-4ea1-9180-80bac4331eef/2e00c2ca-00f1-4fee-ad68-d47920d8c405]
2025-03-22 22:12:15.605091 | orchestrator | 22:12:15.604 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creation complete after 6s [id=586e7cce-cf8d-4d2c-b8f9-3ed910395e5c/eb7c221e-d8a1-4ef7-8d99-2a1292a3f844]
2025-03-22 22:12:15.613471 | orchestrator | 22:12:15.613 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creation complete after 6s [id=ec3ab099-f9e8-4d3d-ad36-5c98ea5cb15e/b423c274-b2a0-4f0a-b616-ca1c2b60d0cd]
2025-03-22 22:12:15.626816 | orchestrator | 22:12:15.626 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 6s [id=586e7cce-cf8d-4d2c-b8f9-3ed910395e5c/4dc307ff-74e9-4f4c-930f-bbccca46b507]
2025-03-22 22:12:15.637846 | orchestrator | 22:12:15.637 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creation complete after 6s [id=4e839774-3b58-4d46-8ffb-d7e99901e384/753a6438-823e-47df-a447-41be61353e18]
2025-03-22 22:12:15.708805 | orchestrator | 22:12:15.708 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 6s [id=eeb28d01-b19b-4054-bd5b-58795a997391/873c6414-afc7-40f1-8cf8-9106a041fae2]
2025-03-22 22:12:20.429521 | orchestrator | 22:12:20.429 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-03-22 22:12:30.432507 | orchestrator | 22:12:30.432 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-03-22 22:12:31.051818 | orchestrator | 22:12:31.051 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=daecad4d-bafb-442b-ae4e-0950a8a65f14]
2025-03-22 22:12:31.077140 | orchestrator | 22:12:31.076 STDOUT terraform: Apply complete! Resources: 82 added, 0 changed, 0 destroyed.
2025-03-22 22:12:31.077252 | orchestrator | 22:12:31.077 STDOUT terraform: Outputs:
2025-03-22 22:12:31.077272 | orchestrator | 22:12:31.077 STDOUT terraform: manager_address =
2025-03-22 22:12:31.084771 | orchestrator | 22:12:31.077 STDOUT terraform: private_key =
2025-03-22 22:12:31.587455 | orchestrator | changed
2025-03-22 22:12:31.633729 |
2025-03-22 22:12:31.633931 | TASK [Create infrastructure (stable)]
2025-03-22 22:12:31.734673 | orchestrator | skipping: Conditional result was False
2025-03-22 22:12:31.753691 |
2025-03-22 22:12:31.753825 | TASK [Fetch manager address]
2025-03-22 22:12:42.185273 | orchestrator | ok
2025-03-22 22:12:42.194458 |
2025-03-22 22:12:42.194567 | TASK [Set manager_host address]
2025-03-22 22:12:42.295013 | orchestrator | ok
2025-03-22 22:12:42.306817 |
2025-03-22 22:12:42.306925 | LOOP [Update ansible collections]
2025-03-22 22:12:42.977594 | orchestrator | changed
2025-03-22 22:12:43.670508 | orchestrator | changed
2025-03-22 22:12:43.695777 |
2025-03-22 22:12:43.695903 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-03-22 22:12:54.204965 | orchestrator | ok
2025-03-22 22:12:54.218741 |
2025-03-22 22:12:54.218856 | TASK [Wait a little longer for the manager so that everything is ready]
2025-03-22 22:13:54.271706 | orchestrator | ok
2025-03-22 22:13:54.283156 |
2025-03-22 22:13:54.283260 | TASK [Fetch manager ssh hostkey]
2025-03-22 22:13:55.358455 | orchestrator | Output suppressed because no_log was given
2025-03-22 22:13:55.376271 |
2025-03-22 22:13:55.376507 | TASK [Get ssh keypair from terraform environment]
2025-03-22 22:13:55.919629 | orchestrator | changed
2025-03-22 22:13:55.936214 |
2025-03-22 22:13:55.936339 | TASK [Point out that the following task takes some time and does not give any output]
2025-03-22 22:13:55.970811 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-03-22 22:13:55.980106 |
2025-03-22 22:13:55.980207 | TASK [Run manager part 0]
2025-03-22 22:13:56.786147 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-03-22 22:13:56.826668 | orchestrator |
2025-03-22 22:13:58.895562 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2025-03-22 22:13:58.895637 | orchestrator |
2025-03-22 22:13:58.895676 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2025-03-22 22:13:58.895703 | orchestrator | ok: [testbed-manager]
2025-03-22 22:14:00.921766 | orchestrator |
2025-03-22 22:14:00.921821 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-03-22 22:14:00.921833 | orchestrator |
2025-03-22 22:14:00.921839 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-03-22 22:14:00.921852 | orchestrator | ok: [testbed-manager]
2025-03-22 22:14:01.632923 | orchestrator |
2025-03-22 22:14:01.633098 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-03-22 22:14:01.633133 | orchestrator | ok: [testbed-manager]
2025-03-22 22:14:01.680117 | orchestrator |
2025-03-22 22:14:01.680158 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-03-22 22:14:01.680184 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:14:01.704116 | orchestrator |
2025-03-22 22:14:01.704147 | orchestrator | TASK [Update package cache] ****************************************************
2025-03-22 22:14:01.704159 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:14:01.726740 | orchestrator |
2025-03-22 22:14:01.726772 | orchestrator | TASK [Install required packages] ***********************************************
2025-03-22 22:14:01.726784 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:14:01.745111 | orchestrator |
2025-03-22 22:14:01.745140 | orchestrator | TASK [Remove some python packages] *********************************************
2025-03-22 22:14:01.745151 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:14:01.768806 | orchestrator |
2025-03-22 22:14:01.768834 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-03-22 22:14:01.768844 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:14:01.790802 | orchestrator |
2025-03-22 22:14:01.790831 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ******************************
2025-03-22 22:14:01.790841 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:14:01.810876 | orchestrator |
2025-03-22 22:14:01.810904 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2025-03-22 22:14:01.810917 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:14:02.679549 | orchestrator |
2025-03-22 22:14:02.679602 | orchestrator | TASK [Set APT options on manager] **********************************************
2025-03-22 22:14:02.679623 | orchestrator | changed: [testbed-manager]
2025-03-22 22:16:50.669226 | orchestrator |
2025-03-22 22:16:50.669307 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2025-03-22 22:16:50.669346 | orchestrator | changed: [testbed-manager]
2025-03-22 22:18:11.702868 | orchestrator |
2025-03-22 22:18:11.702918 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-03-22 22:18:11.702936 | orchestrator | changed: [testbed-manager]
2025-03-22 22:18:33.735739 | orchestrator |
2025-03-22 22:18:33.735849 | orchestrator | TASK [Install required packages] ***********************************************
2025-03-22 22:18:33.735884 | orchestrator | changed: [testbed-manager]
2025-03-22 22:18:44.617090 | orchestrator |
2025-03-22 22:18:44.617211 | orchestrator | TASK [Remove some python packages] *********************************************
2025-03-22 22:18:44.617248 | orchestrator | changed: [testbed-manager]
2025-03-22 22:18:44.659946 | orchestrator |
2025-03-22 22:18:44.660036 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-03-22 22:18:44.660080 | orchestrator | ok: [testbed-manager]
2025-03-22 22:18:45.507076 | orchestrator |
2025-03-22 22:18:45.507127 | orchestrator | TASK [Get current user] ********************************************************
2025-03-22 22:18:45.507172 | orchestrator | ok: [testbed-manager]
2025-03-22 22:18:46.279705 | orchestrator |
2025-03-22 22:18:46.279787 | orchestrator | TASK [Create venv directory] ***************************************************
2025-03-22 22:18:46.279822 | orchestrator | changed: [testbed-manager]
2025-03-22 22:18:54.463695 | orchestrator |
2025-03-22 22:18:54.463782 | orchestrator | TASK [Install netaddr in venv] *************************************************
2025-03-22 22:18:54.463815 | orchestrator | changed: [testbed-manager]
2025-03-22 22:19:01.739099 | orchestrator |
2025-03-22 22:19:01.739236 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2025-03-22 22:19:01.739290 | orchestrator | changed: [testbed-manager]
2025-03-22 22:19:04.893240 | orchestrator |
2025-03-22 22:19:04.893292 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2025-03-22 22:19:04.893311 | orchestrator | changed: [testbed-manager]
2025-03-22 22:19:07.003712 | orchestrator |
2025-03-22 22:19:07.003816 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2025-03-22 22:19:07.003853 | orchestrator | changed: [testbed-manager]
2025-03-22 22:19:08.210754 | orchestrator |
2025-03-22 22:19:08.210802 | orchestrator | TASK [Create directories in /opt/src] ******************************************
2025-03-22 22:19:08.210819 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-03-22 22:19:08.250691 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-03-22 22:19:08.250783 | orchestrator |
2025-03-22 22:19:08.250806 | orchestrator | TASK [Sync sources in /opt/src] ************************************************
2025-03-22 22:19:08.250833 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-03-22 22:19:11.561072 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-03-22 22:19:11.561121 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-03-22 22:19:11.561131 | orchestrator | deprecation_warnings=False in ansible.cfg.
2025-03-22 22:19:11.561170 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-03-22 22:19:12.164581 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-03-22 22:19:12.164682 | orchestrator |
2025-03-22 22:19:12.164703 | orchestrator | TASK [Create /usr/share/ansible directory] *************************************
2025-03-22 22:19:12.164733 | orchestrator | changed: [testbed-manager]
2025-03-22 22:19:39.120979 | orchestrator |
2025-03-22 22:19:39.121092 | orchestrator | TASK [Install collections from Ansible galaxy] *********************************
2025-03-22 22:19:39.121128 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon)
2025-03-22 22:19:41.728811 | orchestrator | changed: [testbed-manager] => (item=ansible.posix)
2025-03-22 22:19:41.728901 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2)
2025-03-22 22:19:41.728919 | orchestrator |
2025-03-22 22:19:41.728936 | orchestrator | TASK [Install local collections] ***********************************************
2025-03-22 22:19:41.728963 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons)
2025-03-22 22:19:43.152028 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services)
2025-03-22 22:19:43.152127 | orchestrator |
2025-03-22 22:19:43.152166 | orchestrator | PLAY [Create operator user] ****************************************************
2025-03-22 22:19:43.152182 | orchestrator |
2025-03-22 22:19:43.152197 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-03-22 22:19:43.152227 | orchestrator | ok: [testbed-manager]
2025-03-22 22:19:43.200401 | orchestrator |
2025-03-22 22:19:43.200460 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-03-22 22:19:43.200480 | orchestrator | ok: [testbed-manager]
2025-03-22 22:19:43.266220 | orchestrator |
2025-03-22 22:19:43.266299 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-03-22 22:19:43.266330 | orchestrator | ok: [testbed-manager]
2025-03-22 22:19:44.037836 | orchestrator |
2025-03-22 22:19:44.038612 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-03-22 22:19:44.038660 | orchestrator | changed: [testbed-manager]
2025-03-22 22:19:44.792663 | orchestrator |
2025-03-22 22:19:44.792760 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-03-22 22:19:44.792795 | orchestrator | changed: [testbed-manager]
2025-03-22 22:19:46.215213 | orchestrator |
2025-03-22 22:19:46.215293 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-03-22 22:19:46.215325 | orchestrator | changed: [testbed-manager] => (item=adm)
2025-03-22 22:19:47.681730 | orchestrator | changed: [testbed-manager] => (item=sudo)
2025-03-22 22:19:47.681828 | orchestrator |
2025-03-22 22:19:47.681848 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-03-22 22:19:47.681880 | orchestrator | changed: [testbed-manager]
2025-03-22 22:19:49.456029 | orchestrator |
2025-03-22 22:19:49.456841 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-03-22 22:19:49.456885 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8)
2025-03-22 22:19:50.038896 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8)
2025-03-22 22:19:50.038989 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8)
2025-03-22 22:19:50.039010 | orchestrator |
2025-03-22 22:19:50.039026 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-03-22 22:19:50.039056 | orchestrator | changed: [testbed-manager]
2025-03-22 22:19:50.106720 | orchestrator |
2025-03-22 22:19:50.106813 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-03-22 22:19:50.106845 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:19:51.018081 | orchestrator |
2025-03-22 22:19:51.018203 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-03-22 22:19:51.018239 | orchestrator | changed: [testbed-manager] => (item=None)
2025-03-22 22:19:51.058913 | orchestrator | changed: [testbed-manager]
2025-03-22 22:19:51.058982 | orchestrator |
2025-03-22 22:19:51.058993 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-03-22 22:19:51.059016 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:19:51.091677 | orchestrator |
2025-03-22 22:19:51.091736 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-03-22 22:19:51.091754 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:19:51.119593 | orchestrator |
2025-03-22 22:19:51.119651 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-03-22 22:19:51.119668 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:19:51.162926 | orchestrator |
2025-03-22 22:19:51.163018 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-03-22 22:19:51.163053 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:19:51.930936 | orchestrator |
2025-03-22 22:19:51.931023 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-03-22 22:19:51.931058 | orchestrator | ok: [testbed-manager]
2025-03-22 22:19:53.345563 | orchestrator |
2025-03-22 22:19:53.345654 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-03-22 22:19:53.345673 | orchestrator |
2025-03-22 22:19:53.345688 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-03-22 22:19:53.345717 | orchestrator | ok: [testbed-manager]
2025-03-22 22:19:54.436405 | orchestrator |
2025-03-22 22:19:54.436496 | orchestrator | TASK [Recursively change ownership of /opt/venv] *******************************
2025-03-22 22:19:54.436529 | orchestrator | changed: [testbed-manager]
2025-03-22 22:19:54.536750 | orchestrator |
2025-03-22 22:19:54.536823 | orchestrator | PLAY RECAP *********************************************************************
2025-03-22 22:19:54.536852 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-03-22 22:19:54.536870 | orchestrator |
2025-03-22 22:19:54.713011 | orchestrator | changed
2025-03-22 22:19:54.732670 |
2025-03-22 22:19:54.732791 | TASK [Point out that the log in on the manager is now possible]
2025-03-22 22:19:54.778587 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'.
2025-03-22 22:19:54.787786 |
2025-03-22 22:19:54.787881 | TASK [Point out that the following task takes some time and does not give any output]
2025-03-22 22:19:54.831405 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
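The osism.commons.operator play earlier in this log sets locale exports in the operator's `.bashrc` and prepares a `.ssh` directory with authorized keys. A small unprivileged sketch of those two steps against a scratch HOME (the real role targets the operator user's actual home directory and defaults are assumptions here, not taken from the role source):

```shell
#!/usr/bin/env bash
# Sketch of the ".bashrc locale" and ".ssh" steps from the operator role,
# run against a throwaway directory so it works without root.
set -e
home=$(mktemp -d)

# Set language variables in .bashrc (mirrors the three loop items above).
for var in LANGUAGE LANG LC_ALL; do
    echo "export ${var}=C.UTF-8" >> "$home/.bashrc"
done

# Create .ssh directory and an authorized_keys file with strict modes,
# as sshd requires.
mkdir -p "$home/.ssh" && chmod 0700 "$home/.ssh"
touch "$home/.ssh/authorized_keys" && chmod 0600 "$home/.ssh/authorized_keys"

grep -c 'C.UTF-8' "$home/.bashrc"   # prints 3
stat -c '%a' "$home/.ssh"           # prints 700
```

The strict 0700/0600 modes matter: with `StrictModes` enabled (the default), sshd refuses key authentication when the `.ssh` directory or `authorized_keys` file is group- or world-writable.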
2025-03-22 22:19:54.840816 |
2025-03-22 22:19:54.840915 | TASK [Run manager part 1 + 2]
2025-03-22 22:19:55.651303 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-03-22 22:19:55.701736 | orchestrator |
2025-03-22 22:19:58.381170 | orchestrator | PLAY [Run manager part 1] ******************************************************
2025-03-22 22:19:58.381214 | orchestrator |
2025-03-22 22:19:58.381234 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-03-22 22:19:58.381249 | orchestrator | ok: [testbed-manager]
2025-03-22 22:19:58.414402 | orchestrator |
2025-03-22 22:19:58.414457 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-03-22 22:19:58.414482 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:19:58.449356 | orchestrator |
2025-03-22 22:19:58.449400 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-03-22 22:19:58.449417 | orchestrator | ok: [testbed-manager]
2025-03-22 22:19:58.483288 | orchestrator |
2025-03-22 22:19:58.483327 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-03-22 22:19:58.483343 | orchestrator | ok: [testbed-manager]
2025-03-22 22:19:58.537370 | orchestrator |
2025-03-22 22:19:58.537409 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-03-22 22:19:58.537426 | orchestrator | ok: [testbed-manager]
2025-03-22 22:19:58.588506 | orchestrator |
2025-03-22 22:19:58.588553 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-03-22 22:19:58.588572 | orchestrator | ok: [testbed-manager]
2025-03-22 22:19:58.625891 | orchestrator |
2025-03-22 22:19:58.625925 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-03-22 22:19:58.625937 | orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager
2025-03-22 22:19:59.363691 | orchestrator |
2025-03-22 22:19:59.363747 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-03-22 22:19:59.363767 | orchestrator | ok: [testbed-manager]
2025-03-22 22:19:59.409200 | orchestrator |
2025-03-22 22:19:59.409244 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-03-22 22:19:59.409261 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:20:00.723879 | orchestrator |
2025-03-22 22:20:00.723940 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-03-22 22:20:00.723964 | orchestrator | changed: [testbed-manager]
2025-03-22 22:20:01.331686 | orchestrator |
2025-03-22 22:20:01.331737 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-03-22 22:20:01.331755 | orchestrator | ok: [testbed-manager]
2025-03-22 22:20:02.456087 | orchestrator |
2025-03-22 22:20:02.456197 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-03-22 22:20:02.456228 | orchestrator | changed: [testbed-manager]
2025-03-22 22:20:16.584898 | orchestrator |
2025-03-22 22:20:16.585015 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-03-22 22:20:16.585052 | orchestrator | changed: [testbed-manager]
2025-03-22 22:20:17.319173 | orchestrator |
2025-03-22 22:20:17.319280 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-03-22 22:20:17.319316 | orchestrator | ok: [testbed-manager]
2025-03-22 22:20:17.375605 | orchestrator |
2025-03-22 22:20:17.375681 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-03-22 22:20:17.375708 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:20:18.372879 | orchestrator |
2025-03-22 22:20:18.372980 | orchestrator | TASK [Copy SSH public key] *****************************************************
2025-03-22 22:20:18.373015 | orchestrator | changed: [testbed-manager]
2025-03-22 22:20:19.399908 | orchestrator |
2025-03-22 22:20:19.400007 | orchestrator | TASK [Copy SSH private key] ****************************************************
2025-03-22 22:20:19.400038 | orchestrator | changed: [testbed-manager]
2025-03-22 22:20:19.969452 | orchestrator |
2025-03-22 22:20:19.969534 | orchestrator | TASK [Create configuration directory] ******************************************
2025-03-22 22:20:19.969561 | orchestrator | changed: [testbed-manager]
2025-03-22 22:20:20.007604 | orchestrator |
2025-03-22 22:20:20.007658 | orchestrator | TASK [Copy testbed repo] *******************************************************
2025-03-22 22:20:20.007674 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-03-22 22:20:22.339801 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-03-22 22:20:22.339912 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-03-22 22:20:22.339933 | orchestrator | deprecation_warnings=False in ansible.cfg.
2025-03-22 22:20:22.339966 | orchestrator | changed: [testbed-manager]
2025-03-22 22:20:32.877415 | orchestrator |
2025-03-22 22:20:32.877528 | orchestrator | TASK [Install python requirements in venv] *************************************
2025-03-22 22:20:32.877564 | orchestrator | ok: [testbed-manager] => (item=Jinja2)
2025-03-22 22:20:34.058698 | orchestrator | ok: [testbed-manager] => (item=PyYAML)
2025-03-22 22:20:34.058798 | orchestrator | ok: [testbed-manager] => (item=packaging)
2025-03-22 22:20:34.058817 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3)
2025-03-22 22:20:34.058834 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2)
2025-03-22 22:20:34.058848 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0)
2025-03-22 22:20:34.058863 | orchestrator |
2025-03-22 22:20:34.058878 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] *********************
2025-03-22 22:20:34.058922 | orchestrator | changed: [testbed-manager]
2025-03-22 22:20:34.099168 | orchestrator |
2025-03-22 22:20:34.099225 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] ****************************
2025-03-22 22:20:34.099245 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:20:36.806854 | orchestrator |
2025-03-22 22:20:36.806959 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] *****************************
2025-03-22 22:20:36.806994 | orchestrator | changed: [testbed-manager]
2025-03-22 22:20:36.846891 | orchestrator |
2025-03-22 22:20:36.846970 | orchestrator | TASK [Run update-ca-trust on RedHat] *******************************************
2025-03-22 22:20:36.846998 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:22:27.086168 | orchestrator |
2025-03-22 22:22:27.086276 | orchestrator | TASK [Run manager part 2] ******************************************************
2025-03-22 22:22:27.086313 | orchestrator | changed: [testbed-manager]
2025-03-22 22:22:28.396427 | orchestrator |
2025-03-22 22:22:28.396496 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-03-22 22:22:28.396525 | orchestrator | ok: [testbed-manager]
2025-03-22 22:22:28.492368 | orchestrator |
2025-03-22 22:22:28.492434 | orchestrator | PLAY RECAP *********************************************************************
2025-03-22 22:22:28.492443 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-03-22 22:22:28.492449 | orchestrator |
2025-03-22 22:22:28.969873 | orchestrator | changed
2025-03-22 22:22:28.989248 |
2025-03-22 22:22:28.989369 | TASK [Reboot manager]
2025-03-22 22:22:30.566151 | orchestrator | changed
2025-03-22 22:22:30.584469 |
2025-03-22 22:22:30.584621 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-03-22 22:22:46.975658 | orchestrator | ok
2025-03-22 22:22:46.987089 |
2025-03-22 22:22:46.987212 | TASK [Wait a little longer for the manager so that everything is ready]
2025-03-22 22:23:47.030264 | orchestrator | ok
2025-03-22 22:23:47.041326 |
2025-03-22 22:23:47.041452 | TASK [Deploy manager + bootstrap nodes]
2025-03-22 22:23:49.629441 | orchestrator |
2025-03-22 22:23:49.633484 | orchestrator | # DEPLOY MANAGER
2025-03-22 22:23:49.633550 | orchestrator |
2025-03-22 22:23:49.633578 | orchestrator | + set -e
2025-03-22 22:23:49.633646 | orchestrator | + echo
2025-03-22 22:23:49.633674 | orchestrator | + echo '# DEPLOY MANAGER'
2025-03-22 22:23:49.633701 | orchestrator | + echo
2025-03-22 22:23:49.633736 | orchestrator | + cat /opt/manager-vars.sh
2025-03-22 22:23:49.633783 | orchestrator | export NUMBER_OF_NODES=6
2025-03-22 22:23:49.633810 | orchestrator |
2025-03-22 22:23:49.633833 | orchestrator | export CEPH_VERSION=quincy
2025-03-22 22:23:49.633855 | orchestrator | export CONFIGURATION_VERSION=main
2025-03-22 22:23:49.633878 | orchestrator | export MANAGER_VERSION=latest
2025-03-22 22:23:49.633900 | orchestrator | export OPENSTACK_VERSION=2024.1
2025-03-22 22:23:49.633923 | orchestrator |
2025-03-22 22:23:49.633946 | orchestrator | export ARA=false
2025-03-22 22:23:49.633969 | orchestrator | export TEMPEST=false
2025-03-22 22:23:49.633991 | orchestrator | export IS_ZUUL=true
2025-03-22 22:23:49.634106 | orchestrator |
2025-03-22 22:23:49.634166 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.215
2025-03-22 22:23:49.634190 | orchestrator | export EXTERNAL_API=false
2025-03-22 22:23:49.634211 | orchestrator |
2025-03-22 22:23:49.634232 | orchestrator | export IMAGE_USER=ubuntu
2025-03-22 22:23:49.634253 | orchestrator | export IMAGE_NODE_USER=ubuntu
2025-03-22 22:23:49.634277 | orchestrator |
2025-03-22 22:23:49.634299 | orchestrator | export CEPH_STACK=ceph-ansible
2025-03-22 22:23:49.634328 | orchestrator |
2025-03-22 22:23:49.634908 | orchestrator | + echo
2025-03-22 22:23:49.634950 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-03-22 22:23:49.634984 | orchestrator | ++ export INTERACTIVE=false
2025-03-22 22:23:49.635068 | orchestrator | ++ INTERACTIVE=false
2025-03-22 22:23:49.635092 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-03-22 22:23:49.635126 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-03-22 22:23:49.635182 | orchestrator | + source /opt/manager-vars.sh
2025-03-22 22:23:49.635204 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-03-22 22:23:49.635231 | orchestrator | ++ NUMBER_OF_NODES=6
2025-03-22 22:23:49.635252 | orchestrator | ++ export CEPH_VERSION=quincy
2025-03-22 22:23:49.635274 | orchestrator | ++ CEPH_VERSION=quincy
2025-03-22 22:23:49.635302 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-03-22 22:23:49.690167 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-03-22 22:23:49.690270 | orchestrator | ++ export MANAGER_VERSION=latest
2025-03-22 22:23:49.690286 | orchestrator | ++ MANAGER_VERSION=latest
2025-03-22 22:23:49.690298 | orchestrator | ++ export OPENSTACK_VERSION=2024.1
2025-03-22 22:23:49.690310 | orchestrator | ++ OPENSTACK_VERSION=2024.1
2025-03-22 22:23:49.690321 | orchestrator | ++ export ARA=false
2025-03-22 22:23:49.690332 | orchestrator | ++ ARA=false
2025-03-22 22:23:49.690344 | orchestrator | ++ export TEMPEST=false
2025-03-22 22:23:49.690355 | orchestrator | ++ TEMPEST=false
2025-03-22 22:23:49.690366 | orchestrator | ++ export IS_ZUUL=true
2025-03-22 22:23:49.690377 | orchestrator | ++ IS_ZUUL=true
2025-03-22 22:23:49.690389 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.215
2025-03-22 22:23:49.690401 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.215
2025-03-22 22:23:49.690418 | orchestrator | ++ export EXTERNAL_API=false
2025-03-22 22:23:49.690430 | orchestrator | ++ EXTERNAL_API=false
2025-03-22 22:23:49.690441 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-03-22 22:23:49.690452 | orchestrator | ++ IMAGE_USER=ubuntu
2025-03-22 22:23:49.690463 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-03-22 22:23:49.690475 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-03-22 22:23:49.690488 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-03-22 22:23:49.690500 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-03-22 22:23:49.690511 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-03-22 22:23:49.690543 | orchestrator | + docker version
2025-03-22 22:23:49.951072 | orchestrator | Client: Docker Engine - Community
2025-03-22 22:23:49.954687 | orchestrator |  Version:           27.5.1
2025-03-22 22:23:49.954716 | orchestrator |  API version:       1.47
2025-03-22 22:23:49.954728 | orchestrator |  Go version:        go1.22.11
2025-03-22 22:23:49.954741 | orchestrator |  Git commit:        9f9e405
2025-03-22 22:23:49.954753 | orchestrator |  Built:             Wed Jan 22 13:41:48 2025
2025-03-22 22:23:49.954766 | orchestrator |  OS/Arch:           linux/amd64
2025-03-22 22:23:49.954778 | orchestrator |  Context:           default
2025-03-22 22:23:49.954790 | orchestrator |
2025-03-22 22:23:49.954802 | orchestrator | Server: Docker Engine - Community
2025-03-22 22:23:49.954814 | orchestrator |  Engine:
2025-03-22 22:23:49.954826 | orchestrator |   Version:          27.5.1
2025-03-22 22:23:49.954838 | orchestrator |   API version:      1.47 (minimum version 1.24)
2025-03-22 22:23:49.954850 | orchestrator |   Go version:       go1.22.11
2025-03-22 22:23:49.954864 | orchestrator |   Git commit:       4c9b3b0
2025-03-22 22:23:49.954902 | orchestrator |   Built:            Wed Jan 22 13:41:48 2025
2025-03-22 22:23:49.954915 | orchestrator |   OS/Arch:          linux/amd64
2025-03-22 22:23:49.954927 | orchestrator |   Experimental:     false
2025-03-22 22:23:49.954938 | orchestrator |  containerd:
2025-03-22 22:23:49.954950 | orchestrator |   Version:          1.7.25
2025-03-22 22:23:49.954962 | orchestrator |   GitCommit:        bcc810d6b9066471b0b6fa75f557a15a1cbf31bb
2025-03-22 22:23:49.954974 | orchestrator |  runc:
2025-03-22 22:23:49.954986 | orchestrator |   Version:          1.2.4
2025-03-22 22:23:49.954998 | orchestrator |   GitCommit:        v1.2.4-0-g6c52b3f
2025-03-22 22:23:49.955010 | orchestrator |  docker-init:
2025-03-22 22:23:49.955022 | orchestrator |   Version:          0.19.0
2025-03-22 22:23:49.955034 | orchestrator |   GitCommit:        de40ad0
2025-03-22 22:23:49.955053 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-03-22 22:23:49.965469 | orchestrator | + set -e
2025-03-22 22:23:49.965927 | orchestrator | + source /opt/manager-vars.sh
2025-03-22 22:23:49.965964 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-03-22 22:23:49.965979 | orchestrator | ++ NUMBER_OF_NODES=6
2025-03-22 22:23:49.965993 | orchestrator | ++ export CEPH_VERSION=quincy
2025-03-22 22:23:49.966007 | orchestrator | ++ CEPH_VERSION=quincy
2025-03-22 22:23:49.966060 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-03-22 22:23:49.966074 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-03-22 22:23:49.966088 | orchestrator | ++ export MANAGER_VERSION=latest
2025-03-22 22:23:49.966103 | orchestrator | ++ MANAGER_VERSION=latest
2025-03-22 22:23:49.966117 | orchestrator | ++ export OPENSTACK_VERSION=2024.1
2025-03-22 22:23:49.966158 | orchestrator | ++ OPENSTACK_VERSION=2024.1
2025-03-22 22:23:49.966181 | orchestrator | ++ export ARA=false
2025-03-22 22:23:49.966202 | orchestrator | ++ ARA=false
2025-03-22 22:23:49.966217 | orchestrator | ++ export TEMPEST=false
2025-03-22 22:23:49.966230 | orchestrator | ++ TEMPEST=false
2025-03-22 22:23:49.966244 | orchestrator | ++ export IS_ZUUL=true
2025-03-22 22:23:49.966258 | orchestrator | ++ IS_ZUUL=true
2025-03-22 22:23:49.966273 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.215
2025-03-22 22:23:49.966287 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.215
2025-03-22 22:23:49.966301 | orchestrator | ++ export EXTERNAL_API=false
2025-03-22 22:23:49.966315 | orchestrator | ++ EXTERNAL_API=false
2025-03-22 22:23:49.966329 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-03-22 22:23:49.966349 | orchestrator | ++ IMAGE_USER=ubuntu
2025-03-22 22:23:49.966363 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-03-22 22:23:49.966377 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-03-22 22:23:49.966391 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-03-22 22:23:49.966405 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-03-22 22:23:49.966419 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-03-22 22:23:49.966433 | orchestrator | ++ export INTERACTIVE=false
2025-03-22 22:23:49.966447 | orchestrator | ++ INTERACTIVE=false
2025-03-22 22:23:49.966461 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-03-22 22:23:49.966475 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-03-22 22:23:49.966495 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-03-22 22:23:49.971705 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-03-22 22:23:49.971727 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh quincy
2025-03-22 22:23:49.971746 | orchestrator | + set -e
2025-03-22 22:23:49.972202 | orchestrator | + VERSION=quincy
2025-03-22 22:23:49.972232 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2025-03-22 22:23:49.978198 | orchestrator | + [[ -n ceph_version: quincy ]]
2025-03-22 22:23:49.982887 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: quincy/g' /opt/configuration/environments/manager/configuration.yml
2025-03-22 22:23:49.982922 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.1
2025-03-22 22:23:49.988867 | orchestrator | + set -e
2025-03-22 22:23:49.989006 | orchestrator | + VERSION=2024.1
2025-03-22 22:23:49.989030 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2025-03-22 22:23:49.991179 | orchestrator | + [[ -n openstack_version: 2024.1 ]]
2025-03-22 22:23:49.994911 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.1/g' /opt/configuration/environments/manager/configuration.yml
2025-03-22 22:23:49.994939 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-03-22 22:23:49.995930 | orchestrator | ++ semver latest 7.0.0
2025-03-22 22:23:50.045447 | orchestrator | + [[ -1 -ge 0 ]]
2025-03-22 22:23:50.078183 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-03-22 22:23:50.078209 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-03-22 22:23:50.078224 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-03-22 22:23:50.078264 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-03-22 22:23:50.081652 | orchestrator | + source /opt/venv/bin/activate
2025-03-22 22:23:50.082787 | orchestrator | ++ deactivate nondestructive
2025-03-22 22:23:50.082810 | orchestrator | ++ '[' -n '' ']'
2025-03-22 22:23:50.082824 | orchestrator | ++ '[' -n '' ']'
2025-03-22 22:23:50.082839 | orchestrator | ++ hash -r
2025-03-22 22:23:50.082854 | orchestrator | ++ '[' -n '' ']'
2025-03-22 22:23:50.082867 | orchestrator | ++ unset VIRTUAL_ENV
2025-03-22 22:23:50.082881 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-03-22 22:23:50.082900 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-03-22 22:23:50.083022 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-03-22 22:23:50.083041 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-03-22 22:23:50.083055 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-03-22 22:23:50.083069 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-03-22 22:23:50.083084 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-03-22 22:23:50.083098 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-03-22 22:23:50.083112 | orchestrator | ++ export PATH
2025-03-22 22:23:50.083152 | orchestrator | ++ '[' -n '' ']'
2025-03-22 22:23:50.083248 | orchestrator | ++ '[' -z '' ']'
2025-03-22 22:23:50.083267 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-03-22 22:23:50.083281 | orchestrator | ++ PS1='(venv) '
2025-03-22 22:23:50.083295 | orchestrator | ++ export PS1
2025-03-22 22:23:50.083309 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-03-22 22:23:50.083323 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-03-22 22:23:50.083337 | orchestrator | ++ hash -r
2025-03-22 22:23:50.083356 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-03-22 22:23:51.556932 | orchestrator |
2025-03-22 22:23:52.204738 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-03-22 22:23:52.204866 | orchestrator |
2025-03-22 22:23:52.204922 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-03-22 22:23:52.204957 | orchestrator | ok: [testbed-manager]
2025-03-22 22:23:53.327347 | orchestrator |
2025-03-22 22:23:53.327461 | orchestrator | TASK [Copy fact files] *********************************************************
2025-03-22 22:23:53.327497 | orchestrator | changed: [testbed-manager]
2025-03-22 22:23:56.061421 | orchestrator |
2025-03-22 22:23:56.061526 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-03-22 22:23:56.061540 | orchestrator |
2025-03-22 22:23:56.061551 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-03-22 22:23:56.061576 | orchestrator | ok: [testbed-manager]
2025-03-22 22:24:02.860812 | orchestrator |
2025-03-22 22:24:02.860946 | orchestrator | TASK [Pull images] *************************************************************
2025-03-22 22:24:02.860986 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2)
2025-03-22 22:24:59.544476 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/mariadb:11.7.2)
2025-03-22 22:24:59.544637 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:quincy)
2025-03-22 22:24:59.544669 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:latest)
2025-03-22 22:24:59.544694 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:2024.1)
2025-03-22 22:24:59.544720 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/redis:7.4.2-alpine)
2025-03-22 22:24:59.544745 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.1.10)
2025-03-22 22:24:59.544771 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:latest)
2025-03-22 22:24:59.544797 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:latest)
2025-03-22 22:24:59.544823 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/postgres:16.8-alpine)
2025-03-22 22:24:59.544849 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/traefik:v3.3.4)
2025-03-22 22:24:59.544875 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/hashicorp/vault:1.19.0)
2025-03-22 22:24:59.544900 | orchestrator |
2025-03-22 22:24:59.544925 | orchestrator | TASK [Check status] ************************************************************
2025-03-22 22:24:59.545007 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-03-22 22:24:59.601070 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left).
2025-03-22 22:24:59.601139 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j56937641679.1522', 'results_file': '/home/dragon/.ansible_async/j56937641679.1522', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'})
2025-03-22 22:24:59.601198 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j862339806356.1547', 'results_file': '/home/dragon/.ansible_async/j862339806356.1547', 'changed': True, 'item': 'index.docker.io/library/mariadb:11.7.2', 'ansible_loop_var': 'item'})
2025-03-22 22:24:59.601214 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-03-22 22:24:59.601236 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j792199252194.1572', 'results_file': '/home/dragon/.ansible_async/j792199252194.1572', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:quincy', 'ansible_loop_var': 'item'})
2025-03-22 22:24:59.601251 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j965478098352.1605', 'results_file': '/home/dragon/.ansible_async/j965478098352.1605', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:latest', 'ansible_loop_var': 'item'})
2025-03-22 22:24:59.601270 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-03-22 22:24:59.601284 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j383338675981.1638', 'results_file': '/home/dragon/.ansible_async/j383338675981.1638', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:2024.1', 'ansible_loop_var': 'item'})
2025-03-22 22:24:59.601298 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j88770276639.1671', 'results_file': '/home/dragon/.ansible_async/j88770276639.1671', 'changed': True, 'item': 'index.docker.io/library/redis:7.4.2-alpine', 'ansible_loop_var': 'item'})
2025-03-22 22:24:59.601312 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j317066635145.1710', 'results_file': '/home/dragon/.ansible_async/j317066635145.1710', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.1.10', 'ansible_loop_var': 'item'})
2025-03-22 22:24:59.601327 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
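The interleaved "Pull images" / "Check status" output above is Ansible's fire-and-forget async pattern: the pulls are started in the background with `poll: 0`, and a second task polls `async_status` for each registered job id, which produces the `FAILED - RETRYING` countdown lines until each job reports `finished`. A minimal sketch, assuming this is how the tasks are written (the module choice, variable names, and timing values are assumptions, not the actual OSISM tasks):

```yaml
# Hedged sketch of the async pull / poll pattern seen in the log above.
- name: Pull images
  community.docker.docker_image:
    name: "{{ item }}"
    source: pull
  loop: "{{ manager_images }}"   # hypothetical list of image references
  async: 3600                    # let each pull run in the background
  poll: 0                        # fire and forget; results collected below
  register: pull_jobs

- name: Check status
  ansible.builtin.async_status:
    jid: "{{ item.ansible_job_id }}"
  loop: "{{ pull_jobs.results }}"
  register: job_result
  until: job_result.finished     # produces the FAILED - RETRYING lines
  retries: 120                   # matches the "120 retries left" countdown
  delay: 10
```

The `(item={'failed': 0, 'started': 1, 'finished': 0, ...})` dictionaries in the log are the registered async job records that the poll loop iterates over.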
2025-03-22 22:24:59.601341 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j663696051.1735', 'results_file': '/home/dragon/.ansible_async/j663696051.1735', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:latest', 'ansible_loop_var': 'item'})
2025-03-22 22:24:59.601355 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j272970178739.1768', 'results_file': '/home/dragon/.ansible_async/j272970178739.1768', 'changed': True, 'item': 'registry.osism.tech/osism/osism:latest', 'ansible_loop_var': 'item'})
2025-03-22 22:24:59.601369 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j609424658962.1801', 'results_file': '/home/dragon/.ansible_async/j609424658962.1801', 'changed': True, 'item': 'index.docker.io/library/postgres:16.8-alpine', 'ansible_loop_var': 'item'})
2025-03-22 22:24:59.601383 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j966291249383.1832', 'results_file': '/home/dragon/.ansible_async/j966291249383.1832', 'changed': True, 'item': 'index.docker.io/library/traefik:v3.3.4', 'ansible_loop_var': 'item'})
2025-03-22 22:24:59.601397 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j423237359339.1864', 'results_file': '/home/dragon/.ansible_async/j423237359339.1864', 'changed': True, 'item': 'index.docker.io/hashicorp/vault:1.19.0', 'ansible_loop_var': 'item'})
2025-03-22 22:24:59.601433 | orchestrator |
2025-03-22 22:24:59.601448 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-03-22 22:24:59.601475 | orchestrator | ok: [testbed-manager]
2025-03-22 22:25:00.167997 | orchestrator |
2025-03-22 22:25:00.168094 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-03-22 22:25:00.168125 | orchestrator | changed: [testbed-manager]
2025-03-22 22:25:00.519444 | orchestrator |
2025-03-22 22:25:00.519563 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] *******************************
2025-03-22 22:25:00.519601 | orchestrator | changed: [testbed-manager]
2025-03-22 22:25:00.865289 | orchestrator |
2025-03-22 22:25:00.865379 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-03-22 22:25:00.865411 | orchestrator | changed: [testbed-manager]
2025-03-22 22:25:00.911267 | orchestrator |
2025-03-22 22:25:00.911305 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-03-22 22:25:00.911349 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:25:01.288087 | orchestrator |
2025-03-22 22:25:01.288257 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-03-22 22:25:01.288296 | orchestrator | ok: [testbed-manager]
2025-03-22 22:25:01.470325 | orchestrator |
2025-03-22 22:25:01.470445 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-03-22 22:25:01.470483 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:25:03.634483 | orchestrator |
2025-03-22 22:25:03.634610 | orchestrator | PLAY [Apply role traefik & netbox] *********************************************
2025-03-22 22:25:03.634630 | orchestrator |
2025-03-22 22:25:03.634646 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-03-22 22:25:03.634678 | orchestrator | ok: [testbed-manager]
2025-03-22 22:25:03.886815 | orchestrator |
2025-03-22 22:25:03.886928 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-03-22 22:25:03.886965 | orchestrator |
2025-03-22 22:25:03.996555 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-03-22 22:25:03.996654 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-03-22 22:25:05.235905 | orchestrator |
2025-03-22 22:25:05.236012 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-03-22 22:25:05.236043 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-03-22 22:25:07.336537 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-03-22 22:25:07.336642 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-03-22 22:25:07.336661 | orchestrator |
2025-03-22 22:25:07.336674 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-03-22 22:25:07.336702 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-03-22 22:25:08.096383 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-03-22 22:25:08.096472 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-03-22 22:25:08.096487 | orchestrator |
2025-03-22 22:25:08.096500 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-03-22 22:25:08.096529 | orchestrator | changed: [testbed-manager] => (item=None)
2025-03-22 22:25:08.802687 | orchestrator | changed: [testbed-manager]
2025-03-22 22:25:08.802776 | orchestrator |
2025-03-22 22:25:08.802790 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-03-22 22:25:08.802814 | orchestrator | changed: [testbed-manager] => (item=None)
2025-03-22 22:25:08.905161 | orchestrator | changed: [testbed-manager]
2025-03-22 22:25:08.905228 | orchestrator |
2025-03-22 22:25:08.905241 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-03-22 22:25:08.905260 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:25:09.349941 | orchestrator |
2025-03-22 22:25:09.350070 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-03-22 22:25:09.350097 | orchestrator | ok: [testbed-manager]
2025-03-22 22:25:09.463702 | orchestrator |
2025-03-22 22:25:09.463748 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-03-22 22:25:09.463769 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-03-22 22:25:10.657352 | orchestrator |
2025-03-22 22:25:10.657448 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-03-22 22:25:10.657475 | orchestrator | changed: [testbed-manager]
2025-03-22 22:25:11.667633 | orchestrator |
2025-03-22 22:25:11.667737 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-03-22 22:25:11.667765 | orchestrator | changed: [testbed-manager]
2025-03-22 22:25:14.872938 | orchestrator |
2025-03-22 22:25:14.873064 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-03-22 22:25:14.873100 | orchestrator | changed: [testbed-manager]
2025-03-22 22:25:15.238519 | orchestrator |
2025-03-22 22:25:15.238621 | orchestrator | TASK [Apply netbox role] *******************************************************
2025-03-22 22:25:15.238654 | orchestrator |
2025-03-22 22:25:15.359314 | orchestrator | TASK [osism.services.netbox : Include install tasks] ***************************
2025-03-22 22:25:15.359387 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager
2025-03-22 22:25:18.297119 | orchestrator |
2025-03-22 22:25:18.297294 | orchestrator | TASK [osism.services.netbox : Install required
packages] ***********************
2025-03-22 22:25:18.297333 | orchestrator | ok: [testbed-manager]
2025-03-22 22:25:18.487759 | orchestrator |
2025-03-22 22:25:18.487863 | orchestrator | TASK [osism.services.netbox : Include config tasks] ****************************
2025-03-22 22:25:18.487899 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager
2025-03-22 22:25:19.765165 | orchestrator |
2025-03-22 22:25:19.765282 | orchestrator | TASK [osism.services.netbox : Create required directories] *********************
2025-03-22 22:25:19.765303 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox)
2025-03-22 22:25:19.886882 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration)
2025-03-22 22:25:19.886960 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets)
2025-03-22 22:25:19.886966 | orchestrator |
2025-03-22 22:25:19.886972 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] *******************
2025-03-22 22:25:19.886988 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager
2025-03-22 22:25:20.613394 | orchestrator |
2025-03-22 22:25:20.614221 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] *****************
2025-03-22 22:25:20.614274 | orchestrator | changed: [testbed-manager] => (item=postgres)
2025-03-22 22:25:21.333527 | orchestrator |
2025-03-22 22:25:21.333635 | orchestrator | TASK [osism.services.netbox : Copy secret files] *******************************
2025-03-22 22:25:21.333670 | orchestrator | changed: [testbed-manager] => (item=None)
2025-03-22 22:25:21.781913 | orchestrator | changed: [testbed-manager]
2025-03-22 22:25:21.782012 | orchestrator |
2025-03-22 22:25:21.782082 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] *****
2025-03-22 22:25:21.782112 | orchestrator | changed: [testbed-manager]
2025-03-22 22:25:22.173588 | orchestrator |
2025-03-22 22:25:22.173684 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] *******************
2025-03-22 22:25:22.173717 | orchestrator | ok: [testbed-manager]
2025-03-22 22:25:22.243310 | orchestrator |
2025-03-22 22:25:22.243381 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ******************************
2025-03-22 22:25:22.243420 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:25:22.983295 | orchestrator |
2025-03-22 22:25:22.983394 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] ***********
2025-03-22 22:25:22.983425 | orchestrator | changed: [testbed-manager]
2025-03-22 22:25:23.158765 | orchestrator |
2025-03-22 22:25:23.158836 | orchestrator | TASK [osism.services.netbox : Include config tasks] ****************************
2025-03-22 22:25:23.158869 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager
2025-03-22 22:25:24.042431 | orchestrator |
2025-03-22 22:25:24.042583 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] ***********
2025-03-22 22:25:24.042622 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers)
2025-03-22 22:25:24.806653 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts)
2025-03-22 22:25:24.806743 | orchestrator |
2025-03-22 22:25:24.806760 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] *******************
2025-03-22 22:25:24.806801 | orchestrator | changed: [testbed-manager] => (item=netbox)
2025-03-22 22:25:25.557530 | orchestrator |
2025-03-22 22:25:25.557634 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ******************
2025-03-22 22:25:25.557668 | orchestrator | changed: [testbed-manager]
2025-03-22 22:25:25.612403 | orchestrator |
2025-03-22 22:25:25.613214 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] ****
2025-03-22 22:25:25.613257 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:25:26.314668 | orchestrator |
2025-03-22 22:25:26.314772 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] *****
2025-03-22 22:25:26.314803 | orchestrator | changed: [testbed-manager]
2025-03-22 22:25:28.338004 | orchestrator |
2025-03-22 22:25:28.338227 | orchestrator | TASK [osism.services.netbox : Copy secret files] *******************************
2025-03-22 22:25:28.338265 | orchestrator | changed: [testbed-manager] => (item=None)
2025-03-22 22:25:35.037623 | orchestrator | changed: [testbed-manager] => (item=None)
2025-03-22 22:25:35.037759 | orchestrator | changed: [testbed-manager] => (item=None)
2025-03-22 22:25:35.037780 | orchestrator | changed: [testbed-manager]
2025-03-22 22:25:35.037799 | orchestrator |
2025-03-22 22:25:35.037814 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ******************
2025-03-22 22:25:35.037847 | orchestrator | changed: [testbed-manager] => (item=custom_fields)
2025-03-22 22:25:35.795357 | orchestrator | changed: [testbed-manager] => (item=device_roles)
2025-03-22 22:25:35.795465 | orchestrator | changed: [testbed-manager] => (item=device_types)
2025-03-22 22:25:35.795481 | orchestrator | changed: [testbed-manager] => (item=groups)
2025-03-22 22:25:35.795496 | orchestrator | changed: [testbed-manager] => (item=manufacturers)
2025-03-22 22:25:35.795514 | orchestrator | changed: [testbed-manager] => (item=object_permissions)
2025-03-22 22:25:35.795529 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles)
2025-03-22 22:25:35.795543 | orchestrator | changed: [testbed-manager] => (item=sites)
2025-03-22 22:25:35.795557 | orchestrator | changed: [testbed-manager] => (item=tags)
2025-03-22 22:25:35.795571 | orchestrator | changed: [testbed-manager] => (item=users)
2025-03-22 22:25:35.795585 | orchestrator |
2025-03-22 22:25:35.795599 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] ***************
2025-03-22 22:25:35.795629 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py)
2025-03-22 22:25:36.023228 | orchestrator |
2025-03-22 22:25:36.023305 | orchestrator | TASK [osism.services.netbox : Include service tasks] ***************************
2025-03-22 22:25:36.023335 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager
2025-03-22 22:25:36.821538 | orchestrator |
2025-03-22 22:25:36.821639 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] *******************
2025-03-22 22:25:36.821670 | orchestrator | changed: [testbed-manager]
2025-03-22 22:25:37.521576 | orchestrator |
2025-03-22 22:25:37.521687 | orchestrator | TASK [osism.services.netbox : Create traefik external network] *****************
2025-03-22 22:25:37.521720 | orchestrator | ok: [testbed-manager]
2025-03-22 22:25:38.398913 | orchestrator |
2025-03-22 22:25:38.399018 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ********************
2025-03-22 22:25:38.399050 | orchestrator | changed: [testbed-manager]
2025-03-22 22:25:40.821807 | orchestrator |
2025-03-22 22:25:40.821926 | orchestrator | TASK [osism.services.netbox : Pull container images] ***************************
2025-03-22 22:25:40.821960 | orchestrator | ok: [testbed-manager]
2025-03-22 22:25:41.865605 | orchestrator |
2025-03-22 22:25:41.865711 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] ***
2025-03-22 22:25:41.865741 |
orchestrator | ok: [testbed-manager]
2025-03-22 22:26:04.223433 | orchestrator |
2025-03-22 22:26:04.223550 | orchestrator | TASK [osism.services.netbox : Manage netbox service] ***************************
2025-03-22 22:26:04.223602 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left).
2025-03-22 22:26:04.304804 | orchestrator | ok: [testbed-manager]
2025-03-22 22:26:04.304828 | orchestrator |
2025-03-22 22:26:04.304839 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ********
2025-03-22 22:26:04.304854 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:26:04.363397 | orchestrator |
2025-03-22 22:26:04.363424 | orchestrator | TASK [osism.services.netbox : Flush handlers] **********************************
2025-03-22 22:26:04.363437 | orchestrator |
2025-03-22 22:26:04.363450 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-03-22 22:26:04.363468 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:26:04.459601 | orchestrator |
2025-03-22 22:26:04.459644 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] ***************
2025-03-22 22:26:04.459669 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager
2025-03-22 22:26:05.484409 | orchestrator |
2025-03-22 22:26:05.484506 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ******
2025-03-22 22:26:05.484535 | orchestrator | ok: [testbed-manager]
2025-03-22 22:26:05.586269 | orchestrator |
2025-03-22 22:26:05.586316 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] ***
2025-03-22 22:26:05.586341 | orchestrator | ok: [testbed-manager]
2025-03-22 22:26:05.663211 | orchestrator |
2025-03-22 22:26:05.663274 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] ***
2025-03-22 22:26:05.663300 | orchestrator | ok: [testbed-manager] => {
2025-03-22 22:26:06.531333 | orchestrator | "msg": "The major version of the running postgres container is 16"
2025-03-22 22:26:06.531450 | orchestrator | }
2025-03-22 22:26:06.531470 | orchestrator |
2025-03-22 22:26:06.531502 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ******************
2025-03-22 22:26:06.531544 | orchestrator | ok: [testbed-manager]
2025-03-22 22:26:07.711726 | orchestrator |
2025-03-22 22:26:07.711828 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] **********
2025-03-22 22:26:07.711859 | orchestrator | ok: [testbed-manager]
2025-03-22 22:26:07.819503 | orchestrator |
2025-03-22 22:26:07.819530 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ******
2025-03-22 22:26:07.819549 | orchestrator | ok: [testbed-manager]
2025-03-22 22:26:07.889533 | orchestrator |
2025-03-22 22:26:07.889564 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] ***
2025-03-22 22:26:07.889585 | orchestrator | ok: [testbed-manager] => {
2025-03-22 22:26:07.965954 | orchestrator | "msg": "The major version of the postgres image is 16"
2025-03-22 22:26:07.965979 | orchestrator | }
2025-03-22 22:26:07.965993 | orchestrator |
2025-03-22 22:26:07.966005 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ******************
2025-03-22 22:26:07.966061 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:26:08.042920 | orchestrator |
2025-03-22 22:26:08.042974 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ******
2025-03-22 22:26:08.042996 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:26:08.115081 | orchestrator |
2025-03-22 22:26:08.115132 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] *********
2025-03-22 22:26:08.115157 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:26:08.179949 | orchestrator |
2025-03-22 22:26:08.180059 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************
2025-03-22 22:26:08.180095 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:26:08.244272 | orchestrator |
2025-03-22 22:26:08.244373 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] ***
2025-03-22 22:26:08.244409 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:26:08.368505 | orchestrator |
2025-03-22 22:26:08.368593 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] *****************
2025-03-22 22:26:08.368622 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:26:09.793128 | orchestrator |
2025-03-22 22:26:09.793262 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] ***************
2025-03-22 22:26:09.793297 | orchestrator | changed: [testbed-manager]
2025-03-22 22:26:09.914496 | orchestrator |
2025-03-22 22:26:09.914565 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] ***
2025-03-22 22:26:09.914602 | orchestrator | ok: [testbed-manager]
2025-03-22 22:27:09.985535 | orchestrator |
2025-03-22 22:27:09.985674 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] *****
2025-03-22 22:27:09.985711 | orchestrator | Pausing for 60 seconds
2025-03-22 22:27:10.095540 | orchestrator | changed: [testbed-manager]
2025-03-22 22:27:10.095619 | orchestrator |
2025-03-22 22:27:10.095636 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] ***
2025-03-22 22:27:10.095666 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager
2025-03-22 22:31:24.207902 | orchestrator |
2025-03-22 22:31:24.208040 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] ***
2025-03-22 22:31:24.208079 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left).
2025-03-22 22:31:26.561279 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left).
2025-03-22 22:31:26.561402 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left).
2025-03-22 22:31:26.561421 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left).
2025-03-22 22:31:26.561437 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left).
2025-03-22 22:31:26.561454 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left).
2025-03-22 22:31:26.561468 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left).
2025-03-22 22:31:26.561483 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left).
2025-03-22 22:31:26.561496 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left).
2025-03-22 22:31:26.561511 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left).
2025-03-22 22:31:26.561524 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left).
2025-03-22 22:31:26.561538 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left).
2025-03-22 22:31:26.561552 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left).
2025-03-22 22:31:26.561566 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left).
2025-03-22 22:31:26.561580 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left).
2025-03-22 22:31:26.561593 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left).
2025-03-22 22:31:26.561607 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left).
2025-03-22 22:31:26.561621 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left).
2025-03-22 22:31:26.561635 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left).
2025-03-22 22:31:26.561649 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left).
2025-03-22 22:31:26.561663 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (40 retries left).
2025-03-22 22:31:26.561677 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (39 retries left).
2025-03-22 22:31:26.561690 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (38 retries left).
2025-03-22 22:31:26.561704 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (37 retries left).
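The `FAILED - RETRYING` countdown above is Ansible's `until`/`retries`/`delay` loop: the health check is re-run until its condition holds or the retries are exhausted. A hedged sketch of such a task, assuming a `docker compose` status command and an `/opt/netbox` project directory (the actual role task is not shown in the log, so the command and condition are assumptions):

```yaml
# Sketch of a retried container health check (command and condition assumed).
- name: Check that all containers are in a good state
  ansible.builtin.command:
    cmd: docker compose ps --format json
    chdir: /opt/netbox           # hypothetical compose project directory
  register: compose_state
  until: >-
    'unhealthy' not in compose_state.stdout and
    'starting' not in compose_state.stdout
  retries: 60                    # matches the "60 retries left" countdown
  delay: 5
```

Each failed evaluation of the `until` condition emits one `FAILED - RETRYING` line; the task only fails the play if all 60 attempts are used up.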
2025-03-22 22:31:26.561719 | orchestrator | changed: [testbed-manager]
2025-03-22 22:31:26.561762 | orchestrator |
2025-03-22 22:31:26.561778 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-03-22 22:31:26.561807 | orchestrator |
2025-03-22 22:31:26.561823 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-03-22 22:31:26.561854 | orchestrator | ok: [testbed-manager]
2025-03-22 22:31:26.697514 | orchestrator |
2025-03-22 22:31:26.697570 | orchestrator | TASK [Apply manager role] ******************************************************
2025-03-22 22:31:26.697599 | orchestrator |
2025-03-22 22:31:26.763784 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-03-22 22:31:26.763820 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-03-22 22:31:28.889502 | orchestrator |
2025-03-22 22:31:28.889612 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-03-22 22:31:28.889646 | orchestrator | ok: [testbed-manager]
2025-03-22 22:31:28.940003 | orchestrator |
2025-03-22 22:31:28.940043 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-03-22 22:31:28.940065 | orchestrator | ok: [testbed-manager]
2025-03-22 22:31:29.072164 | orchestrator |
2025-03-22 22:31:29.072307 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-03-22 22:31:29.072341 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-03-22 22:31:32.222577 | orchestrator |
2025-03-22 22:31:32.222705 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-03-22 22:31:32.222746 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-03-22 22:31:32.955486 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-03-22 22:31:32.955578 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-03-22 22:31:32.955590 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-03-22 22:31:32.955600 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-03-22 22:31:32.955610 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-03-22 22:31:32.955621 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-03-22 22:31:32.955630 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-03-22 22:31:32.955640 | orchestrator |
2025-03-22 22:31:32.955653 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-03-22 22:31:32.955676 | orchestrator | changed: [testbed-manager]
2025-03-22 22:31:33.059354 | orchestrator |
2025-03-22 22:31:33.059399 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-03-22 22:31:33.059421 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-03-22 22:31:34.448048 | orchestrator |
2025-03-22 22:31:34.448648 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-03-22 22:31:34.448700 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-03-22 22:31:35.209577 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-03-22 22:31:35.209702 | orchestrator |
2025-03-22 22:31:35.209724 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-03-22 22:31:35.209758 | orchestrator | changed: [testbed-manager]
2025-03-22 22:31:35.273362 | orchestrator |
2025-03-22 22:31:35.273398 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-03-22 22:31:35.273424 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:31:35.341638 | orchestrator |
2025-03-22 22:31:35.341724 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-03-22 22:31:35.341757 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-03-22 22:31:36.834461 | orchestrator |
2025-03-22 22:31:36.834560 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-03-22 22:31:36.834586 | orchestrator | changed: [testbed-manager] => (item=None)
2025-03-22 22:31:37.552599 | orchestrator | changed: [testbed-manager] => (item=None)
2025-03-22 22:31:37.552711 | orchestrator | changed: [testbed-manager]
2025-03-22 22:31:37.552759 | orchestrator |
2025-03-22 22:31:37.552777 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-03-22 22:31:37.552808 | orchestrator | changed: [testbed-manager]
2025-03-22 22:31:37.652480 | orchestrator |
2025-03-22 22:31:37.652555 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-03-22 22:31:37.652587 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager
2025-03-22 22:31:38.378765 | orchestrator |
2025-03-22 22:31:38.378874 | orchestrator | TASK [osism.services.manager : Copy secret files] ******************************
2025-03-22 22:31:38.378906 | orchestrator | changed: [testbed-manager] => (item=None)
2025-03-22 22:31:39.089929 | orchestrator | changed: [testbed-manager]
2025-03-22 22:31:39.090142 | orchestrator |
2025-03-22 22:31:39.090180 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] *******************
2025-03-22 22:31:39.090231 | orchestrator | changed: [testbed-manager]
2025-03-22 22:31:39.219174 | orchestrator |
2025-03-22 22:31:39.219361 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-03-22 22:31:39.219415 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-03-22 22:31:39.969623 | orchestrator |
2025-03-22 22:31:39.969747 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-03-22 22:31:39.969785 | orchestrator | changed: [testbed-manager]
2025-03-22 22:31:40.424712 | orchestrator |
2025-03-22 22:31:40.424807 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-03-22 22:31:40.424834 | orchestrator | changed: [testbed-manager]
2025-03-22 22:31:41.766057 | orchestrator |
2025-03-22 22:31:41.766178 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-03-22 22:31:41.766216 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-03-22 22:31:42.467591 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-03-22 22:31:42.467697 | orchestrator |
2025-03-22 22:31:42.467716 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-03-22 22:31:42.467748 | orchestrator | changed: [testbed-manager]
2025-03-22 22:31:42.843659 | orchestrator |
2025-03-22 22:31:42.843757 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-03-22 22:31:42.843791 | orchestrator | ok: [testbed-manager]
2025-03-22 22:31:42.983786 | orchestrator |
2025-03-22 22:31:42.983882 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-03-22 22:31:42.983914 | orchestrator | skipping:
[testbed-manager] 2025-03-22 22:31:43.703926 | orchestrator | 2025-03-22 22:31:43.704032 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-03-22 22:31:43.704066 | orchestrator | changed: [testbed-manager] 2025-03-22 22:31:43.781107 | orchestrator | 2025-03-22 22:31:43.781180 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-03-22 22:31:43.781210 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-03-22 22:31:43.825796 | orchestrator | 2025-03-22 22:31:43.825849 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-03-22 22:31:43.825892 | orchestrator | ok: [testbed-manager] 2025-03-22 22:31:46.210322 | orchestrator | 2025-03-22 22:31:46.210441 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-03-22 22:31:46.210476 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-03-22 22:31:47.021932 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-03-22 22:31:47.022095 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-03-22 22:31:47.022114 | orchestrator | 2025-03-22 22:31:47.022129 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-03-22 22:31:47.022161 | orchestrator | changed: [testbed-manager] 2025-03-22 22:31:47.096974 | orchestrator | 2025-03-22 22:31:47.097044 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-03-22 22:31:47.097076 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-03-22 22:31:47.148792 | orchestrator | 2025-03-22 22:31:47.148843 | orchestrator | TASK 
[osism.services.manager : Include scripts vars file] ********************** 2025-03-22 22:31:47.148871 | orchestrator | ok: [testbed-manager] 2025-03-22 22:31:47.897786 | orchestrator | 2025-03-22 22:31:47.897891 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-03-22 22:31:47.897924 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-03-22 22:31:47.988070 | orchestrator | 2025-03-22 22:31:47.988104 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-03-22 22:31:47.988149 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-03-22 22:31:48.795812 | orchestrator | 2025-03-22 22:31:48.795912 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-03-22 22:31:48.795943 | orchestrator | changed: [testbed-manager] 2025-03-22 22:31:49.519148 | orchestrator | 2025-03-22 22:31:49.519310 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-03-22 22:31:49.519345 | orchestrator | ok: [testbed-manager] 2025-03-22 22:31:49.566155 | orchestrator | 2025-03-22 22:31:49.566219 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-03-22 22:31:49.566278 | orchestrator | skipping: [testbed-manager] 2025-03-22 22:31:49.646402 | orchestrator | 2025-03-22 22:31:49.646459 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-03-22 22:31:49.646486 | orchestrator | ok: [testbed-manager] 2025-03-22 22:31:50.621334 | orchestrator | 2025-03-22 22:31:50.621435 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-03-22 22:31:50.621466 | orchestrator | changed: [testbed-manager] 2025-03-22 22:32:11.872044 | orchestrator | 2025-03-22 
22:32:11.872172 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-03-22 22:32:11.872207 | orchestrator | changed: [testbed-manager] 2025-03-22 22:32:12.587068 | orchestrator | 2025-03-22 22:32:12.587183 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-03-22 22:32:12.587218 | orchestrator | ok: [testbed-manager] 2025-03-22 22:32:15.569854 | orchestrator | 2025-03-22 22:32:15.569971 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-03-22 22:32:15.570011 | orchestrator | changed: [testbed-manager] 2025-03-22 22:32:15.632684 | orchestrator | 2025-03-22 22:32:15.632748 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-03-22 22:32:15.632777 | orchestrator | ok: [testbed-manager] 2025-03-22 22:32:15.719082 | orchestrator | 2025-03-22 22:32:15.719142 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-03-22 22:32:15.719159 | orchestrator | 2025-03-22 22:32:15.719174 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-03-22 22:32:15.719200 | orchestrator | skipping: [testbed-manager] 2025-03-22 22:33:15.774595 | orchestrator | 2025-03-22 22:33:15.774734 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-03-22 22:33:15.774774 | orchestrator | Pausing for 60 seconds 2025-03-22 22:33:22.415024 | orchestrator | changed: [testbed-manager] 2025-03-22 22:33:22.415157 | orchestrator | 2025-03-22 22:33:22.415181 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-03-22 22:33:22.415260 | orchestrator | changed: [testbed-manager] 2025-03-22 22:34:04.608954 | orchestrator | 2025-03-22 22:34:04.609106 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for 
an healthy manager service] *** 2025-03-22 22:34:04.609181 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-03-22 22:34:11.887861 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-03-22 22:34:11.888003 | orchestrator | changed: [testbed-manager] 2025-03-22 22:34:11.888036 | orchestrator | 2025-03-22 22:34:11.888064 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-03-22 22:34:11.888101 | orchestrator | changed: [testbed-manager] 2025-03-22 22:34:11.999229 | orchestrator | 2025-03-22 22:34:11.999309 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-03-22 22:34:11.999342 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-03-22 22:34:12.058810 | orchestrator | 2025-03-22 22:34:12.058898 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-03-22 22:34:12.058916 | orchestrator | 2025-03-22 22:34:12.058931 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-03-22 22:34:12.058960 | orchestrator | skipping: [testbed-manager] 2025-03-22 22:34:12.267872 | orchestrator | 2025-03-22 22:34:12.267952 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-22 22:34:12.267971 | orchestrator | testbed-manager : ok=103 changed=54 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 2025-03-22 22:34:12.267986 | orchestrator | 2025-03-22 22:34:12.268014 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-03-22 22:34:12.274114 | orchestrator | + deactivate 2025-03-22 22:34:12.274157 | orchestrator | + '[' -n 
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-03-22 22:34:12.274177 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-03-22 22:34:12.274240 | orchestrator | + export PATH 2025-03-22 22:34:12.274257 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-03-22 22:34:12.274272 | orchestrator | + '[' -n '' ']' 2025-03-22 22:34:12.274288 | orchestrator | + hash -r 2025-03-22 22:34:12.274304 | orchestrator | + '[' -n '' ']' 2025-03-22 22:34:12.274319 | orchestrator | + unset VIRTUAL_ENV 2025-03-22 22:34:12.274334 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-03-22 22:34:12.274349 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-03-22 22:34:12.274365 | orchestrator | + unset -f deactivate 2025-03-22 22:34:12.274381 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-03-22 22:34:12.274407 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-03-22 22:34:12.275063 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-03-22 22:34:12.275088 | orchestrator | + local max_attempts=60 2025-03-22 22:34:12.275106 | orchestrator | + local name=ceph-ansible 2025-03-22 22:34:12.275123 | orchestrator | + local attempt_num=1 2025-03-22 22:34:12.275144 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-03-22 22:34:12.312240 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-22 22:34:12.312538 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-03-22 22:34:12.312567 | orchestrator | + local max_attempts=60 2025-03-22 22:34:12.312582 | orchestrator | + local name=kolla-ansible 2025-03-22 22:34:12.312596 | orchestrator | + local attempt_num=1 2025-03-22 22:34:12.312616 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-03-22 22:34:12.344747 | orchestrator | + [[ healthy 
== \h\e\a\l\t\h\y ]] 2025-03-22 22:34:12.345207 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-03-22 22:34:12.345235 | orchestrator | + local max_attempts=60 2025-03-22 22:34:12.345248 | orchestrator | + local name=osism-ansible 2025-03-22 22:34:12.345261 | orchestrator | + local attempt_num=1 2025-03-22 22:34:12.345279 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-03-22 22:34:12.373993 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-22 22:34:13.586351 | orchestrator | + [[ true == \t\r\u\e ]] 2025-03-22 22:34:13.586458 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-03-22 22:34:13.586490 | orchestrator | ++ semver latest 9.0.0 2025-03-22 22:34:13.633628 | orchestrator | + [[ -1 -ge 0 ]] 2025-03-22 22:34:13.634381 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-03-22 22:34:13.634403 | orchestrator | + wait_for_container_healthy 60 netbox-netbox-1 2025-03-22 22:34:13.634418 | orchestrator | + local max_attempts=60 2025-03-22 22:34:13.634432 | orchestrator | + local name=netbox-netbox-1 2025-03-22 22:34:13.634445 | orchestrator | + local attempt_num=1 2025-03-22 22:34:13.634463 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' netbox-netbox-1 2025-03-22 22:34:13.663934 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-22 22:34:13.669226 | orchestrator | + /opt/configuration/scripts/bootstrap/000-netbox.sh 2025-03-22 22:34:13.669254 | orchestrator | + set -e 2025-03-22 22:34:15.579239 | orchestrator | + osism manage netbox --parallel 4 2025-03-22 22:34:15.579376 | orchestrator | 2025-03-22 22:34:15 | INFO  | It takes a moment until task 173d4072-7f42-4b1c-a015-4a06b10ec000 (netbox-manager) has been started and output is visible here. 
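The shell trace above shows a `wait_for_container_healthy` helper polling `docker inspect` for each container's health status. A minimal sketch reconstructing it from the traced variables (`max_attempts`, `name`, `attempt_num`) follows; the sleep interval and failure handling are assumptions, since the trace exits on the first `healthy` result and never shows a retry:

```shell
# Sketch of wait_for_container_healthy as traced in the log above.
# The retry interval (WAIT_INTERVAL) is an assumption; the script in
# /opt/configuration may differ in detail.
wait_for_container_healthy() {
    max_attempts="$1"
    name="$2"
    attempt_num=1
    while [ "$attempt_num" -le "$max_attempts" ]; do
        # Query the container's health status from its healthcheck.
        status=$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)
        if [ "$status" = "healthy" ]; then
            return 0
        fi
        attempt_num=$((attempt_num + 1))
        sleep "${WAIT_INTERVAL:-5}"
    done
    return 1
}
```

In the log, each container (ceph-ansible, kolla-ansible, osism-ansible, netbox-netbox-1) was already healthy on the first probe, so the loop returned immediately.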
2025-03-22 22:34:17.869575 | orchestrator | 2025-03-22 22:34:17 | INFO  | Wait for NetBox service 2025-03-22 22:34:19.787384 | orchestrator | 2025-03-22 22:34:19.787498 | orchestrator | PLAY [Wait for NetBox service] ************************************************* 2025-03-22 22:34:19.871863 | orchestrator | 2025-03-22 22:34:19.872800 | orchestrator | TASK [Wait for NetBox service REST API] **************************************** 2025-03-22 22:34:21.162859 | orchestrator | ok: [localhost] 2025-03-22 22:34:21.163685 | orchestrator | 2025-03-22 22:34:21.163738 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-22 22:34:21.163930 | orchestrator | 2025-03-22 22:34:21 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-22 22:34:21.164162 | orchestrator | 2025-03-22 22:34:21 | INFO  | Please wait and do not abort execution. 2025-03-22 22:34:21.164217 | orchestrator | localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-22 22:34:22.144337 | orchestrator | 2025-03-22 22:34:22 | INFO  | Manage devicetypes 2025-03-22 22:34:33.949939 | orchestrator | 2025-03-22 22:34:33 | INFO  | Manage moduletypes 2025-03-22 22:34:34.220980 | orchestrator | 2025-03-22 22:34:34 | INFO  | Manage resources 2025-03-22 22:34:34.229450 | orchestrator | 2025-03-22 22:34:34 | INFO  | Handle file /netbox/resources/000-base.yml 2025-03-22 22:34:35.090479 | orchestrator | IGNORE_SSL_ERRORS is True, catching exception and disabling SSL verification. 
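The "Wait for NetBox service REST API" task above blocks until the API answers; the subsequent SSL note shows the tooling falls back to disabled certificate verification when `IGNORE_SSL_ERRORS` is set. A hedged shell sketch of the same readiness probe, not the tool's actual implementation:

```shell
# Illustrative readiness probe for an HTTP endpoint such as the NetBox
# REST API. The -k flag mirrors the IGNORE_SSL_ERRORS behaviour seen in
# the log; interval and URL are assumptions.
wait_for_http_ok() {
    url="$1"
    max_attempts="$2"
    attempt_num=1
    while [ "$attempt_num" -le "$max_attempts" ]; do
        # -w '%{http_code}' prints only the response status code.
        code=$(curl -ks -o /dev/null -w '%{http_code}' "$url")
        if [ "$code" = "200" ]; then
            return 0
        fi
        attempt_num=$((attempt_num + 1))
        sleep "${WAIT_INTERVAL:-5}"
    done
    return 1
}
```

Usage would look like `wait_for_http_ok https://manager/api/ 50`, matching the 50-retry budget visible in the earlier "Wait for an healthy manager service" handler.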
2025-03-22 22:34:35.091308 | orchestrator | Manufacturer queued for addition: Arista 2025-03-22 22:34:35.091729 | orchestrator | Manufacturer queued for addition: Other 2025-03-22 22:34:35.092423 | orchestrator | Manufacturer Created: Arista - 2 2025-03-22 22:34:35.092692 | orchestrator | Manufacturer Created: Other - 3 2025-03-22 22:34:35.094103 | orchestrator | Device Type Created: Arista - DCS-7050TX3-48C8 - 2 2025-03-22 22:34:35.094642 | orchestrator | Interface Template Created: Ethernet1 - 10GBASE-T (10GE) - 2 - 1 2025-03-22 22:34:35.094673 | orchestrator | Interface Template Created: Ethernet2 - 10GBASE-T (10GE) - 2 - 2 2025-03-22 22:34:35.094914 | orchestrator | Interface Template Created: Ethernet3 - 10GBASE-T (10GE) - 2 - 3 2025-03-22 22:34:35.095859 | orchestrator | Interface Template Created: Ethernet4 - 10GBASE-T (10GE) - 2 - 4 2025-03-22 22:34:35.096302 | orchestrator | Interface Template Created: Ethernet5 - 10GBASE-T (10GE) - 2 - 5 2025-03-22 22:34:35.097168 | orchestrator | Interface Template Created: Ethernet6 - 10GBASE-T (10GE) - 2 - 6 2025-03-22 22:34:35.097798 | orchestrator | Interface Template Created: Ethernet7 - 10GBASE-T (10GE) - 2 - 7 2025-03-22 22:34:35.098711 | orchestrator | Interface Template Created: Ethernet8 - 10GBASE-T (10GE) - 2 - 8 2025-03-22 22:34:35.099452 | orchestrator | Interface Template Created: Ethernet9 - 10GBASE-T (10GE) - 2 - 9 2025-03-22 22:34:35.099848 | orchestrator | Interface Template Created: Ethernet10 - 10GBASE-T (10GE) - 2 - 10 2025-03-22 22:34:35.100633 | orchestrator | Interface Template Created: Ethernet11 - 10GBASE-T (10GE) - 2 - 11 2025-03-22 22:34:35.101362 | orchestrator | Interface Template Created: Ethernet12 - 10GBASE-T (10GE) - 2 - 12 2025-03-22 22:34:35.101393 | orchestrator | Interface Template Created: Ethernet13 - 10GBASE-T (10GE) - 2 - 13 2025-03-22 22:34:35.101658 | orchestrator | Interface Template Created: Ethernet14 - 10GBASE-T (10GE) - 2 - 14 2025-03-22 22:34:35.102317 | orchestrator | 
Interface Template Created: Ethernet15 - 10GBASE-T (10GE) - 2 - 15 2025-03-22 22:34:35.102777 | orchestrator | Interface Template Created: Ethernet16 - 10GBASE-T (10GE) - 2 - 16 2025-03-22 22:34:35.103592 | orchestrator | Interface Template Created: Ethernet17 - 10GBASE-T (10GE) - 2 - 17 2025-03-22 22:34:35.104277 | orchestrator | Interface Template Created: Ethernet18 - 10GBASE-T (10GE) - 2 - 18 2025-03-22 22:34:35.104776 | orchestrator | Interface Template Created: Ethernet19 - 10GBASE-T (10GE) - 2 - 19 2025-03-22 22:34:35.105063 | orchestrator | Interface Template Created: Ethernet20 - 10GBASE-T (10GE) - 2 - 20 2025-03-22 22:34:35.105547 | orchestrator | Interface Template Created: Ethernet21 - 10GBASE-T (10GE) - 2 - 21 2025-03-22 22:34:35.106110 | orchestrator | Interface Template Created: Ethernet22 - 10GBASE-T (10GE) - 2 - 22 2025-03-22 22:34:35.106735 | orchestrator | Interface Template Created: Ethernet23 - 10GBASE-T (10GE) - 2 - 23 2025-03-22 22:34:35.107328 | orchestrator | Interface Template Created: Ethernet24 - 10GBASE-T (10GE) - 2 - 24 2025-03-22 22:34:35.107597 | orchestrator | Interface Template Created: Ethernet25 - 10GBASE-T (10GE) - 2 - 25 2025-03-22 22:34:35.108505 | orchestrator | Interface Template Created: Ethernet26 - 10GBASE-T (10GE) - 2 - 26 2025-03-22 22:34:35.109376 | orchestrator | Interface Template Created: Ethernet27 - 10GBASE-T (10GE) - 2 - 27 2025-03-22 22:34:35.109406 | orchestrator | Interface Template Created: Ethernet28 - 10GBASE-T (10GE) - 2 - 28 2025-03-22 22:34:35.110288 | orchestrator | Interface Template Created: Ethernet29 - 10GBASE-T (10GE) - 2 - 29 2025-03-22 22:34:35.110786 | orchestrator | Interface Template Created: Ethernet30 - 10GBASE-T (10GE) - 2 - 30 2025-03-22 22:34:35.111321 | orchestrator | Interface Template Created: Ethernet31 - 10GBASE-T (10GE) - 2 - 31 2025-03-22 22:34:35.111926 | orchestrator | Interface Template Created: Ethernet32 - 10GBASE-T (10GE) - 2 - 32 2025-03-22 22:34:35.112650 | orchestrator | 
Interface Template Created: Ethernet33 - 10GBASE-T (10GE) - 2 - 33 2025-03-22 22:34:35.113730 | orchestrator | Interface Template Created: Ethernet34 - 10GBASE-T (10GE) - 2 - 34 2025-03-22 22:34:35.114795 | orchestrator | Interface Template Created: Ethernet35 - 10GBASE-T (10GE) - 2 - 35 2025-03-22 22:34:35.115181 | orchestrator | Interface Template Created: Ethernet36 - 10GBASE-T (10GE) - 2 - 36 2025-03-22 22:34:35.115830 | orchestrator | Interface Template Created: Ethernet37 - 10GBASE-T (10GE) - 2 - 37 2025-03-22 22:34:35.116441 | orchestrator | Interface Template Created: Ethernet38 - 10GBASE-T (10GE) - 2 - 38 2025-03-22 22:34:35.117044 | orchestrator | Interface Template Created: Ethernet39 - 10GBASE-T (10GE) - 2 - 39 2025-03-22 22:34:35.117394 | orchestrator | Interface Template Created: Ethernet40 - 10GBASE-T (10GE) - 2 - 40 2025-03-22 22:34:35.118351 | orchestrator | Interface Template Created: Ethernet41 - 10GBASE-T (10GE) - 2 - 41 2025-03-22 22:34:35.119234 | orchestrator | Interface Template Created: Ethernet42 - 10GBASE-T (10GE) - 2 - 42 2025-03-22 22:34:35.119561 | orchestrator | Interface Template Created: Ethernet43 - 10GBASE-T (10GE) - 2 - 43 2025-03-22 22:34:35.119831 | orchestrator | Interface Template Created: Ethernet44 - 10GBASE-T (10GE) - 2 - 44 2025-03-22 22:34:35.120583 | orchestrator | Interface Template Created: Ethernet45 - 10GBASE-T (10GE) - 2 - 45 2025-03-22 22:34:35.121121 | orchestrator | Interface Template Created: Ethernet46 - 10GBASE-T (10GE) - 2 - 46 2025-03-22 22:34:35.121626 | orchestrator | Interface Template Created: Ethernet47 - 10GBASE-T (10GE) - 2 - 47 2025-03-22 22:34:35.121863 | orchestrator | Interface Template Created: Ethernet48 - 10GBASE-T (10GE) - 2 - 48 2025-03-22 22:34:35.122424 | orchestrator | Interface Template Created: Ethernet49/1 - QSFP28 (100GE) - 2 - 49 2025-03-22 22:34:35.123032 | orchestrator | Interface Template Created: Ethernet50/1 - QSFP28 (100GE) - 2 - 50 2025-03-22 22:34:35.123637 | orchestrator | 
Interface Template Created: Ethernet51/1 - QSFP28 (100GE) - 2 - 51 2025-03-22 22:34:35.124012 | orchestrator | Interface Template Created: Ethernet52/1 - QSFP28 (100GE) - 2 - 52 2025-03-22 22:34:35.124886 | orchestrator | Interface Template Created: Ethernet53/1 - QSFP28 (100GE) - 2 - 53 2025-03-22 22:34:35.125510 | orchestrator | Interface Template Created: Ethernet54/1 - QSFP28 (100GE) - 2 - 54 2025-03-22 22:34:35.125540 | orchestrator | Interface Template Created: Ethernet55/1 - QSFP28 (100GE) - 2 - 55 2025-03-22 22:34:35.125988 | orchestrator | Interface Template Created: Ethernet56/1 - QSFP28 (100GE) - 2 - 56 2025-03-22 22:34:35.126612 | orchestrator | Interface Template Created: Management1 - 1000BASE-T (1GE) - 2 - 57 2025-03-22 22:34:35.127058 | orchestrator | Power Port Template Created: PS1 - C14 - 2 - 1 2025-03-22 22:34:35.127374 | orchestrator | Power Port Template Created: PS2 - C14 - 2 - 2 2025-03-22 22:34:35.127773 | orchestrator | Console Port Template Created: Console - RJ-45 - 2 - 1 2025-03-22 22:34:35.128165 | orchestrator | Device Type Created: Other - Baremetal-Device - 3 2025-03-22 22:34:35.128224 | orchestrator | Interface Template Created: Ethernet1 - 10GBASE-T (10GE) - 3 - 58 2025-03-22 22:34:35.128800 | orchestrator | Interface Template Created: Ethernet2 - 10GBASE-T (10GE) - 3 - 59 2025-03-22 22:34:35.130005 | orchestrator | Power Port Template Created: PS1 - C14 - 3 - 3 2025-03-22 22:34:35.130743 | orchestrator | Device Type Created: Other - Manager - 4 2025-03-22 22:34:35.131143 | orchestrator | Interface Template Created: Ethernet0 - 1000BASE-T (1GE) - 4 - 60 2025-03-22 22:34:35.131408 | orchestrator | Interface Template Created: Ethernet1 - 10GBASE-T (10GE) - 4 - 61 2025-03-22 22:34:35.131694 | orchestrator | Interface Template Created: Ethernet2 - 10GBASE-T (10GE) - 4 - 62 2025-03-22 22:34:35.132150 | orchestrator | Interface Template Created: Ethernet3 - 10GBASE-T (10GE) - 4 - 63 2025-03-22 22:34:35.132393 | orchestrator | Power Port 
Template Created: PS1 - C14 - 4 - 4 2025-03-22 22:34:35.132629 | orchestrator | Device Type Created: Other - Node - 5 2025-03-22 22:34:35.132946 | orchestrator | Interface Template Created: Ethernet0 - 1000BASE-T (1GE) - 5 - 64 2025-03-22 22:34:35.134150 | orchestrator | Interface Template Created: Ethernet1 - 10GBASE-T (10GE) - 5 - 65 2025-03-22 22:34:35.134912 | orchestrator | Interface Template Created: Ethernet2 - 10GBASE-T (10GE) - 5 - 66 2025-03-22 22:34:35.135321 | orchestrator | Interface Template Created: Ethernet3 - 10GBASE-T (10GE) - 5 - 67 2025-03-22 22:34:35.135769 | orchestrator | Power Port Template Created: PS1 - C14 - 5 - 5 2025-03-22 22:34:35.136267 | orchestrator | Device Type Created: Other - Baremetal-Housing - 6 2025-03-22 22:34:35.136569 | orchestrator | Interface Template Created: Ethernet0 - 1000BASE-T (1GE) - 6 - 68 2025-03-22 22:34:35.137102 | orchestrator | Interface Template Created: Ethernet1 - 10GBASE-T (10GE) - 6 - 69 2025-03-22 22:34:35.137435 | orchestrator | Interface Template Created: Ethernet2 - 10GBASE-T (10GE) - 6 - 70 2025-03-22 22:34:35.137758 | orchestrator | Interface Template Created: Ethernet3 - 10GBASE-T (10GE) - 6 - 71 2025-03-22 22:34:35.138633 | orchestrator | Power Port Template Created: PS1 - C14 - 6 - 6 2025-03-22 22:34:35.139131 | orchestrator | Manufacturer queued for addition: .gitkeep 2025-03-22 22:34:35.140910 | orchestrator | Manufacturer Created: .gitkeep - 4 2025-03-22 22:34:35.141283 | orchestrator | 2025-03-22 22:34:35.142802 | orchestrator | PLAY [Manage NetBox resources defined in 000-base.yml] ************************* 2025-03-22 22:34:35.144965 | orchestrator | 2025-03-22 22:34:35.145400 | orchestrator | TASK [Manage NetBox resource Management of type ipam_role] ********************* 2025-03-22 22:34:36.403470 | orchestrator | ok: [localhost] 2025-03-22 22:34:36.406863 | orchestrator | 2025-03-22 22:34:36.407423 | orchestrator | TASK [Manage NetBox resource External of type ipam_role] 
*********************** 2025-03-22 22:34:37.292375 | orchestrator | ok: [localhost] 2025-03-22 22:34:37.292607 | orchestrator | 2025-03-22 22:34:37.292639 | orchestrator | TASK [Manage NetBox resource Api of type device_role] ************************** 2025-03-22 22:34:38.825616 | orchestrator | changed: [localhost] 2025-03-22 22:34:38.828763 | orchestrator | 2025-03-22 22:34:38.830284 | orchestrator | TASK [Manage NetBox resource Leaf of type device_role] ************************* 2025-03-22 22:34:39.782410 | orchestrator | ok: [localhost] 2025-03-22 22:34:39.786456 | orchestrator | 2025-03-22 22:34:39.786683 | orchestrator | TASK [Manage NetBox resource Spine of type device_role] ************************ 2025-03-22 22:34:40.742774 | orchestrator | ok: [localhost] 2025-03-22 22:34:40.749436 | orchestrator | 2025-03-22 22:34:40.750080 | orchestrator | TASK [Manage NetBox resource Oob of type device_role] ************************** 2025-03-22 22:34:41.838913 | orchestrator | changed: [localhost] 2025-03-22 22:34:41.843710 | orchestrator | 2025-03-22 22:34:41.844884 | orchestrator | TASK [Manage NetBox resource Storage of type device_role] ********************** 2025-03-22 22:34:42.779871 | orchestrator | ok: [localhost] 2025-03-22 22:34:42.780690 | orchestrator | 2025-03-22 22:34:42.780889 | orchestrator | TASK [Manage NetBox resource Compute of type device_role] ********************** 2025-03-22 22:34:43.772148 | orchestrator | ok: [localhost] 2025-03-22 22:34:43.774751 | orchestrator | 2025-03-22 22:34:43.775326 | orchestrator | TASK [Manage NetBox resource Manager of type device_role] ********************** 2025-03-22 22:34:44.844515 | orchestrator | ok: [localhost] 2025-03-22 22:34:44.845917 | orchestrator | 2025-03-22 22:34:44.846922 | orchestrator | TASK [Manage NetBox resource Ironic of type device_role] *********************** 2025-03-22 22:34:45.850902 | orchestrator | changed: [localhost] 2025-03-22 22:34:45.851798 | orchestrator | 2025-03-22 
22:34:45.852346 | orchestrator | TASK [Manage NetBox resource Control of type device_role] ********************** 2025-03-22 22:34:46.881739 | orchestrator | ok: [localhost] 2025-03-22 22:34:46.883682 | orchestrator | 2025-03-22 22:34:46.887297 | orchestrator | TASK [Manage NetBox resource Network of type device_role] ********************** 2025-03-22 22:34:47.818757 | orchestrator | ok: [localhost] 2025-03-22 22:34:47.819144 | orchestrator | 2025-03-22 22:34:47.819653 | orchestrator | TASK [Manage NetBox resource Router of type device_role] *********************** 2025-03-22 22:34:48.757351 | orchestrator | ok: [localhost] 2025-03-22 22:34:48.761623 | orchestrator | 2025-03-22 22:34:48.762683 | orchestrator | TASK [Manage NetBox resource Firewall of type device_role] ********************* 2025-03-22 22:34:49.769632 | orchestrator | ok: [localhost] 2025-03-22 22:34:49.771469 | orchestrator | 2025-03-22 22:34:49.771779 | orchestrator | TASK [Manage NetBox resource Dummy of type device_role] ************************ 2025-03-22 22:34:50.818881 | orchestrator | changed: [localhost] 2025-03-22 22:34:51.983410 | orchestrator | 2025-03-22 22:34:51.983521 | orchestrator | TASK [Manage NetBox resource Sample of type device_role] *********************** 2025-03-22 22:34:51.983553 | orchestrator | changed: [localhost] 2025-03-22 22:34:51.987438 | orchestrator | 2025-03-22 22:34:51.987806 | orchestrator | TASK [Manage NetBox resource Housing of type device_role] ********************** 2025-03-22 22:34:53.013915 | orchestrator | ok: [localhost] 2025-03-22 22:34:53.016161 | orchestrator | 2025-03-22 22:34:53.016505 | orchestrator | TASK [Manage NetBox resource DPU of type device_role] ************************** 2025-03-22 22:34:54.060307 | orchestrator | changed: [localhost] 2025-03-22 22:34:54.061738 | orchestrator | 2025-03-22 22:34:54.062446 | orchestrator | TASK [Manage NetBox resource managed-by-osism of type tag] ********************* 2025-03-22 22:34:55.410131 | 
orchestrator | changed: [localhost] 2025-03-22 22:34:55.412999 | orchestrator | 2025-03-22 22:34:55.415259 | orchestrator | TASK [Manage NetBox resource managed-by-ironic of type tag] ******************** 2025-03-22 22:34:56.527527 | orchestrator | changed: [localhost] 2025-03-22 22:34:56.527999 | orchestrator | 2025-03-22 22:34:56.528036 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-22 22:34:56.528250 | orchestrator | 2025-03-22 22:34:56 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-22 22:34:56.528279 | orchestrator | 2025-03-22 22:34:56 | INFO  | Please wait and do not abort execution. 2025-03-22 22:34:56.528624 | orchestrator | localhost : ok=20 changed=8 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-22 22:34:56.780062 | orchestrator | 2025-03-22 22:34:56 | INFO  | Handle file /netbox/resources/100-initialise.yml 2025-03-22 22:34:57.718304 | orchestrator | 2025-03-22 22:34:57.764929 | orchestrator | PLAY [Manage NetBox resources defined in 100-initialise.yml] ******************* 2025-03-22 22:34:57.764990 | orchestrator | 2025-03-22 22:34:57.766294 | orchestrator | TASK [Manage NetBox resource Discworld of type site] *************************** 2025-03-22 22:34:59.090574 | orchestrator | changed: [localhost] 2025-03-22 22:34:59.092869 | orchestrator | 2025-03-22 22:34:59.094111 | orchestrator | TASK [Manage NetBox resource Ankh-Morpork of type location] ******************** 2025-03-22 22:35:00.576821 | orchestrator | changed: [localhost] 2025-03-22 22:35:00.578007 | orchestrator | 2025-03-22 22:35:01.970750 | orchestrator | TASK [Manage NetBox resource of type prefix] *********************************** 2025-03-22 22:35:01.970882 | orchestrator | changed: [localhost] 2025-03-22 22:35:01.971359 | orchestrator | 2025-03-22 22:35:01.971598 | orchestrator | TASK [Manage NetBox resource of type prefix] *********************************** 
2025-03-22 22:35:03.026540 | orchestrator | changed: [localhost]
2025-03-22 22:35:03.028185 | orchestrator |
2025-03-22 22:35:03.028926 | orchestrator | TASK [Manage NetBox resource of type prefix] ***********************************
2025-03-22 22:35:04.163664 | orchestrator | changed: [localhost]
2025-03-22 22:35:04.164781 | orchestrator |
2025-03-22 22:35:04.165560 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-03-22 22:35:05.543569 | orchestrator | changed: [localhost]
2025-03-22 22:35:05.543898 | orchestrator |
2025-03-22 22:35:05.544341 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-03-22 22:35:06.571529 | orchestrator | changed: [localhost]
2025-03-22 22:35:06.571843 | orchestrator |
2025-03-22 22:35:06.572420 | orchestrator | PLAY RECAP *********************************************************************
2025-03-22 22:35:06.572605 | orchestrator | 2025-03-22 22:35:06 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-22 22:35:06.572747 | orchestrator | 2025-03-22 22:35:06 | INFO  | Please wait and do not abort execution.
2025-03-22 22:35:06.573527 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 22:35:06.841426 | orchestrator | 2025-03-22 22:35:06 | INFO  | Handle file /netbox/resources/200-rack-1000.yml
2025-03-22 22:35:07.708469 | orchestrator |
2025-03-22 22:35:07.708785 | orchestrator | PLAY [Manage NetBox resources defined in 200-rack-1000.yml] ********************
2025-03-22 22:35:07.752511 | orchestrator |
2025-03-22 22:35:07.752888 | orchestrator | TASK [Manage NetBox resource 1000 of type rack] ********************************
2025-03-22 22:35:09.343307 | orchestrator | changed: [localhost]
2025-03-22 22:35:09.344090 | orchestrator |
2025-03-22 22:35:09.344779 | orchestrator | TASK [Manage NetBox resource testbed-switch-0 of type device] ******************
2025-03-22 22:35:16.637770 | orchestrator | changed: [localhost]
2025-03-22 22:35:23.838586 | orchestrator |
2025-03-22 22:35:23.838711 | orchestrator | TASK [Manage NetBox resource testbed-switch-1 of type device] ******************
2025-03-22 22:35:23.838747 | orchestrator | changed: [localhost]
2025-03-22 22:35:23.839691 | orchestrator |
2025-03-22 22:35:23.839952 | orchestrator | TASK [Manage NetBox resource testbed-switch-2 of type device] ******************
2025-03-22 22:35:30.717516 | orchestrator | changed: [localhost]
2025-03-22 22:35:30.719809 | orchestrator |
2025-03-22 22:35:37.821549 | orchestrator | TASK [Manage NetBox resource testbed-switch-oob of type device] ****************
2025-03-22 22:35:37.821693 | orchestrator | changed: [localhost]
2025-03-22 22:35:37.823565 | orchestrator |
2025-03-22 22:35:37.826221 | orchestrator | TASK [Manage NetBox resource testbed-manager of type device] *******************
2025-03-22 22:35:40.940084 | orchestrator | changed: [localhost]
2025-03-22 22:35:40.941309 | orchestrator |
2025-03-22 22:35:40.941366 | orchestrator | TASK [Manage NetBox resource testbed-node-0 of type device] ********************
2025-03-22 22:35:43.668584 | orchestrator | changed: [localhost]
2025-03-22 22:35:43.669561 | orchestrator |
2025-03-22 22:35:46.394508 | orchestrator | TASK [Manage NetBox resource testbed-node-1 of type device] ********************
2025-03-22 22:35:46.394605 | orchestrator | changed: [localhost]
2025-03-22 22:35:46.397357 | orchestrator |
2025-03-22 22:35:49.052891 | orchestrator | TASK [Manage NetBox resource testbed-node-2 of type device] ********************
2025-03-22 22:35:49.053712 | orchestrator | changed: [localhost]
2025-03-22 22:35:51.800541 | orchestrator |
2025-03-22 22:35:51.800656 | orchestrator | TASK [Manage NetBox resource testbed-node-3 of type device] ********************
2025-03-22 22:35:51.800692 | orchestrator | changed: [localhost]
2025-03-22 22:35:51.804602 | orchestrator |
2025-03-22 22:35:51.807987 | orchestrator | TASK [Manage NetBox resource testbed-node-4 of type device] ********************
2025-03-22 22:35:54.277914 | orchestrator | changed: [localhost]
2025-03-22 22:35:54.278842 | orchestrator |
2025-03-22 22:35:57.148711 | orchestrator | TASK [Manage NetBox resource testbed-node-5 of type device] ********************
2025-03-22 22:35:57.148847 | orchestrator | changed: [localhost]
2025-03-22 22:35:57.152530 | orchestrator |
2025-03-22 22:35:57.154176 | orchestrator | TASK [Manage NetBox resource testbed-node-6 of type device] ********************
2025-03-22 22:36:00.296480 | orchestrator | changed: [localhost]
2025-03-22 22:36:00.301844 | orchestrator |
2025-03-22 22:36:00.303654 | orchestrator | TASK [Manage NetBox resource testbed-node-7 of type device] ********************
2025-03-22 22:36:02.951547 | orchestrator | changed: [localhost]
2025-03-22 22:36:02.953666 | orchestrator |
2025-03-22 22:36:02.954720 | orchestrator | TASK [Manage NetBox resource testbed-node-8 of type device] ********************
2025-03-22 22:36:05.473387 | orchestrator | changed: [localhost]
2025-03-22 22:36:05.475443 | orchestrator |
2025-03-22 22:36:05.475954 | orchestrator | TASK [Manage NetBox resource testbed-node-9 of type device] ********************
2025-03-22 22:36:07.953581 | orchestrator | changed: [localhost]
2025-03-22 22:36:07.954180 | orchestrator |
2025-03-22 22:36:07.955624 | orchestrator | PLAY RECAP *********************************************************************
2025-03-22 22:36:07.956360 | orchestrator | 2025-03-22 22:36:07 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-22 22:36:07.956389 | orchestrator | 2025-03-22 22:36:07 | INFO  | Please wait and do not abort execution.
2025-03-22 22:36:07.956411 | orchestrator | localhost : ok=16 changed=16 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 22:36:08.222832 | orchestrator | 2025-03-22 22:36:08 | INFO  | Handle file /netbox/resources/300-testbed-switch-0.yml
2025-03-22 22:36:08.228624 | orchestrator | 2025-03-22 22:36:08 | INFO  | Handle file /netbox/resources/300-testbed-node-9.yml
2025-03-22 22:36:08.232147 | orchestrator | 2025-03-22 22:36:08 | INFO  | Handle file /netbox/resources/300-testbed-node-1.yml
2025-03-22 22:36:08.237717 | orchestrator | 2025-03-22 22:36:08 | INFO  | Handle file /netbox/resources/300-testbed-node-3.yml
2025-03-22 22:36:09.157753 | orchestrator |
2025-03-22 22:36:09.158079 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-switch-0.yml] *************
2025-03-22 22:36:09.241855 | orchestrator |
2025-03-22 22:36:09.243190 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:09.315656 | orchestrator |
2025-03-22 22:36:09.316085 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-9.yml] ***************
2025-03-22 22:36:09.340083 | orchestrator |
2025-03-22 22:36:09.341364 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-3.yml] ***************
2025-03-22 22:36:09.377716 | orchestrator |
2025-03-22 22:36:09.379306 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:09.384499 | orchestrator |
2025-03-22 22:36:09.385739 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-1.yml] ***************
2025-03-22 22:36:09.395068 | orchestrator |
2025-03-22 22:36:09.397861 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:09.443014 | orchestrator |
2025-03-22 22:36:09.443620 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:12.484294 | orchestrator | changed: [localhost]
2025-03-22 22:36:12.492272 | orchestrator |
2025-03-22 22:36:12.727165 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:12.727297 | orchestrator | changed: [localhost]
2025-03-22 22:36:12.733098 | orchestrator |
2025-03-22 22:36:12.733538 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:13.104915 | orchestrator | changed: [localhost]
2025-03-22 22:36:13.118835 | orchestrator |
2025-03-22 22:36:13.120753 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:13.303136 | orchestrator | changed: [localhost]
2025-03-22 22:36:13.309855 | orchestrator |
2025-03-22 22:36:13.312060 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-03-22 22:36:15.529642 | orchestrator | changed: [localhost]
2025-03-22 22:36:15.540407 | orchestrator |
2025-03-22 22:36:15.543137 | orchestrator | PLAY RECAP *********************************************************************
2025-03-22 22:36:15.545596 | orchestrator | localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 22:36:15.573965 | orchestrator | 2025-03-22 22:36:15 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-22 22:36:15.574004 | orchestrator | 2025-03-22 22:36:15 | INFO  | Please wait and do not abort execution.
2025-03-22 22:36:15.574072 | orchestrator | changed: [localhost]
2025-03-22 22:36:15.574670 | orchestrator |
2025-03-22 22:36:15.574703 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:15.871811 | orchestrator | 2025-03-22 22:36:15 | INFO  | Handle file /netbox/resources/300-testbed-node-6.yml
2025-03-22 22:36:16.365854 | orchestrator | changed: [localhost]
2025-03-22 22:36:16.370717 | orchestrator |
2025-03-22 22:36:16.374842 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:16.963616 | orchestrator |
2025-03-22 22:36:17.029559 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-6.yml] ***************
2025-03-22 22:36:17.029673 | orchestrator |
2025-03-22 22:36:17.035045 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:18.712734 | orchestrator | changed: [localhost]
2025-03-22 22:36:18.716674 | orchestrator |
2025-03-22 22:36:18.718168 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:19.162526 | orchestrator | changed: [localhost]
2025-03-22 22:36:19.163354 | orchestrator |
2025-03-22 22:36:19.164152 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:20.690626 | orchestrator | changed: [localhost]
2025-03-22 22:36:20.691921 | orchestrator |
2025-03-22 22:36:20.694101 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:21.782433 | orchestrator | changed: [localhost]
2025-03-22 22:36:21.787216 | orchestrator |
2025-03-22 22:36:21.788429 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-03-22 22:36:22.158461 | orchestrator | changed: [localhost]
2025-03-22 22:36:22.160490 | orchestrator |
2025-03-22 22:36:22.161834 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-03-22 22:36:23.909434 | orchestrator | changed: [localhost]
2025-03-22 22:36:23.913297 | orchestrator |
2025-03-22 22:36:23.913653 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-03-22 22:36:24.511284 | orchestrator | changed: [localhost]
2025-03-22 22:36:24.520264 | orchestrator |
2025-03-22 22:36:24.521384 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:24.678970 | orchestrator | changed: [localhost]
2025-03-22 22:36:24.679923 | orchestrator |
2025-03-22 22:36:24.680534 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:24.924171 | orchestrator | changed: [localhost]
2025-03-22 22:36:24.933643 | orchestrator |
2025-03-22 22:36:24.934477 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-03-22 22:36:25.663967 | orchestrator | changed: [localhost]
2025-03-22 22:36:25.668380 | orchestrator |
2025-03-22 22:36:25.668888 | orchestrator | TASK [Manage NetBox resource testbed-node-3 of type device] ********************
2025-03-22 22:36:26.659993 | orchestrator | changed: [localhost]
2025-03-22 22:36:26.669249 | orchestrator |
2025-03-22 22:36:26.670481 | orchestrator | TASK [Manage NetBox resource testbed-node-9 of type device] ********************
2025-03-22 22:36:27.185947 | orchestrator | changed: [localhost]
2025-03-22 22:36:27.194416 | orchestrator |
2025-03-22 22:36:27.194560 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:27.288364 | orchestrator | changed: [localhost]
2025-03-22 22:36:27.297825 | orchestrator |
2025-03-22 22:36:27.299565 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:27.794642 | orchestrator | changed: [localhost]
2025-03-22 22:36:27.800645 | orchestrator |
2025-03-22 22:36:27.801623 | orchestrator | PLAY RECAP *********************************************************************
2025-03-22 22:36:27.802347 | orchestrator | 2025-03-22 22:36:27 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-22 22:36:27.803683 | orchestrator | 2025-03-22 22:36:27 | INFO  | Please wait and do not abort execution.
2025-03-22 22:36:27.807746 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 22:36:28.145417 | orchestrator | 2025-03-22 22:36:28 | INFO  | Handle file /netbox/resources/300-testbed-switch-2.yml
2025-03-22 22:36:29.051067 | orchestrator |
2025-03-22 22:36:29.053002 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-switch-2.yml] *************
2025-03-22 22:36:29.160060 | orchestrator |
2025-03-22 22:36:29.457133 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:29.457302 | orchestrator | changed: [localhost]
2025-03-22 22:36:29.459836 | orchestrator |
2025-03-22 22:36:29.461470 | orchestrator | PLAY RECAP *********************************************************************
2025-03-22 22:36:29.751648 | orchestrator | 2025-03-22 22:36:29 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-22 22:36:29.751755 | orchestrator | 2025-03-22 22:36:29 | INFO  | Please wait and do not abort execution.
2025-03-22 22:36:29.751772 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 22:36:29.751803 | orchestrator | 2025-03-22 22:36:29 | INFO  | Handle file /netbox/resources/300-testbed-node-5.yml
2025-03-22 22:36:29.782819 | orchestrator | changed: [localhost]
2025-03-22 22:36:29.785107 | orchestrator |
2025-03-22 22:36:29.786249 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-03-22 22:36:29.904096 | orchestrator | changed: [localhost]
2025-03-22 22:36:29.910604 | orchestrator |
2025-03-22 22:36:30.604559 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-03-22 22:36:30.604671 | orchestrator |
2025-03-22 22:36:30.606256 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-5.yml] ***************
2025-03-22 22:36:30.651069 | orchestrator |
2025-03-22 22:36:30.651280 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:31.713539 | orchestrator | changed: [localhost]
2025-03-22 22:36:31.723343 | orchestrator |
2025-03-22 22:36:31.725643 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-03-22 22:36:31.882596 | orchestrator | changed: [localhost]
2025-03-22 22:36:31.890183 | orchestrator |
2025-03-22 22:36:32.383426 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-03-22 22:36:32.384323 | orchestrator | changed: [localhost]
2025-03-22 22:36:32.387619 | orchestrator |
2025-03-22 22:36:32.391093 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-03-22 22:36:33.374466 | orchestrator | changed: [localhost]
2025-03-22 22:36:33.376567 | orchestrator |
2025-03-22 22:36:33.377616 | orchestrator | TASK [Manage NetBox resource testbed-node-1 of type device] ********************
2025-03-22 22:36:33.426690 | orchestrator | changed: [localhost]
2025-03-22 22:36:33.436842 | orchestrator |
2025-03-22 22:36:34.192282 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:34.192422 | orchestrator | changed: [localhost]
2025-03-22 22:36:34.192862 | orchestrator |
2025-03-22 22:36:34.195240 | orchestrator | TASK [Manage NetBox resource testbed-node-6 of type device] ********************
2025-03-22 22:36:34.454435 | orchestrator | changed: [localhost]
2025-03-22 22:36:34.475048 | orchestrator |
2025-03-22 22:36:34.475328 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:35.710769 | orchestrator | changed: [localhost]
2025-03-22 22:36:35.718645 | orchestrator |
2025-03-22 22:36:35.719460 | orchestrator | PLAY RECAP *********************************************************************
2025-03-22 22:36:35.719778 | orchestrator | 2025-03-22 22:36:35 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-22 22:36:35.720305 | orchestrator | 2025-03-22 22:36:35 | INFO  | Please wait and do not abort execution.
2025-03-22 22:36:35.722895 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 22:36:35.969994 | orchestrator | changed: [localhost]
2025-03-22 22:36:35.979397 | orchestrator |
2025-03-22 22:36:36.031797 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:36.031892 | orchestrator | 2025-03-22 22:36:36 | INFO  | Handle file /netbox/resources/300-testbed-node-8.yml
2025-03-22 22:36:36.416042 | orchestrator | changed: [localhost]
2025-03-22 22:36:36.420883 | orchestrator |
2025-03-22 22:36:36.422116 | orchestrator | PLAY RECAP *********************************************************************
2025-03-22 22:36:36.422156 | orchestrator | 2025-03-22 22:36:36 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-22 22:36:36.422327 | orchestrator | 2025-03-22 22:36:36 | INFO  | Please wait and do not abort execution.
2025-03-22 22:36:36.422356 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 22:36:36.700172 | orchestrator | 2025-03-22 22:36:36 | INFO  | Handle file /netbox/resources/300-testbed-node-0.yml
2025-03-22 22:36:36.907594 | orchestrator | changed: [localhost]
2025-03-22 22:36:36.910397 | orchestrator |
2025-03-22 22:36:36.910614 | orchestrator | PLAY RECAP *********************************************************************
2025-03-22 22:36:36.910647 | orchestrator | 2025-03-22 22:36:36 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-22 22:36:36.910712 | orchestrator | 2025-03-22 22:36:36 | INFO  | Please wait and do not abort execution.
2025-03-22 22:36:36.911351 | orchestrator | localhost : ok=3 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 22:36:36.956509 | orchestrator |
2025-03-22 22:36:36.956632 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-8.yml] ***************
2025-03-22 22:36:37.007828 | orchestrator |
2025-03-22 22:36:37.008271 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:37.228476 | orchestrator | 2025-03-22 22:36:37 | INFO  | Handle file /netbox/resources/300-testbed-manager.yml
2025-03-22 22:36:37.610964 | orchestrator |
2025-03-22 22:36:37.612773 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-0.yml] ***************
2025-03-22 22:36:37.667912 | orchestrator |
2025-03-22 22:36:37.668807 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:38.117272 | orchestrator |
2025-03-22 22:36:38.188973 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-manager.yml] **************
2025-03-22 22:36:38.189032 | orchestrator |
2025-03-22 22:36:38.871052 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:38.871190 | orchestrator | changed: [localhost]
2025-03-22 22:36:38.872603 | orchestrator |
2025-03-22 22:36:38.874327 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:40.467040 | orchestrator | changed: [localhost]
2025-03-22 22:36:40.468326 | orchestrator |
2025-03-22 22:36:40.469595 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:40.540702 | orchestrator | changed: [localhost]
2025-03-22 22:36:40.542858 | orchestrator |
2025-03-22 22:36:40.544236 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:41.199589 | orchestrator | changed: [localhost]
2025-03-22 22:36:41.205875 | orchestrator |
2025-03-22 22:36:41.207525 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:41.612335 | orchestrator | changed: [localhost]
2025-03-22 22:36:41.617042 | orchestrator |
2025-03-22 22:36:41.617908 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-03-22 22:36:43.240306 | orchestrator | changed: [localhost]
2025-03-22 22:36:43.242469 | orchestrator |
2025-03-22 22:36:43.244164 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:43.308596 | orchestrator | changed: [localhost]
2025-03-22 22:36:43.316370 | orchestrator |
2025-03-22 22:36:43.317378 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:43.858306 | orchestrator | changed: [localhost]
2025-03-22 22:36:43.859584 | orchestrator |
2025-03-22 22:36:43.861054 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:44.127531 | orchestrator | changed: [localhost]
2025-03-22 22:36:44.133048 | orchestrator |
2025-03-22 22:36:44.136735 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-03-22 22:36:45.818320 | orchestrator | changed: [localhost]
2025-03-22 22:36:45.830447 | orchestrator |
2025-03-22 22:36:45.846854 | orchestrator | TASK [Manage NetBox resource testbed-node-5 of type device] ********************
2025-03-22 22:36:45.846892 | orchestrator | changed: [localhost]
2025-03-22 22:36:45.860033 | orchestrator |
2025-03-22 22:36:45.860336 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:46.426567 | orchestrator | changed: [localhost]
2025-03-22 22:36:46.429511 | orchestrator |
2025-03-22 22:36:46.429834 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:46.540481 | orchestrator | changed: [localhost]
2025-03-22 22:36:46.541114 | orchestrator |
2025-03-22 22:36:46.543322 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:48.235120 | orchestrator | changed: [localhost]
2025-03-22 22:36:48.240546 | orchestrator |
2025-03-22 22:36:48.241498 | orchestrator | PLAY RECAP *********************************************************************
2025-03-22 22:36:48.243321 | orchestrator | 2025-03-22 22:36:48 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-22 22:36:48.243366 | orchestrator | 2025-03-22 22:36:48 | INFO  | Please wait and do not abort execution.
2025-03-22 22:36:48.243389 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 22:36:48.408283 | orchestrator | changed: [localhost]
2025-03-22 22:36:48.410416 | orchestrator |
2025-03-22 22:36:48.533837 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-03-22 22:36:48.533958 | orchestrator | 2025-03-22 22:36:48 | INFO  | Handle file /netbox/resources/300-testbed-node-4.yml
2025-03-22 22:36:49.114816 | orchestrator | changed: [localhost]
2025-03-22 22:36:49.123745 | orchestrator |
2025-03-22 22:36:49.123994 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-03-22 22:36:49.485096 | orchestrator |
2025-03-22 22:36:49.541591 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-4.yml] ***************
2025-03-22 22:36:49.541673 | orchestrator |
2025-03-22 22:36:49.544566 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:50.181696 | orchestrator | changed: [localhost]
2025-03-22 22:36:50.190321 | orchestrator |
2025-03-22 22:36:50.191146 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-03-22 22:36:50.643380 | orchestrator | changed: [localhost]
2025-03-22 22:36:50.648376 | orchestrator |
2025-03-22 22:36:50.649093 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-03-22 22:36:51.472558 | orchestrator | changed: [localhost]
2025-03-22 22:36:51.479895 | orchestrator |
2025-03-22 22:36:52.682149 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-03-22 22:36:52.682307 | orchestrator | changed: [localhost]
2025-03-22 22:36:52.687927 | orchestrator |
2025-03-22 22:36:52.688737 | orchestrator | TASK [Manage NetBox resource testbed-node-0 of type device] ********************
2025-03-22 22:36:52.826542 | orchestrator | changed: [localhost]
2025-03-22 22:36:52.829598 | orchestrator |
2025-03-22 22:36:52.830089 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-03-22 22:36:53.022512 | orchestrator | changed: [localhost]
2025-03-22 22:36:53.030274 | orchestrator |
2025-03-22 22:36:53.031919 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:53.388363 | orchestrator | changed: [localhost]
2025-03-22 22:36:53.402963 | orchestrator |
2025-03-22 22:36:53.403680 | orchestrator | TASK [Manage NetBox resource testbed-node-8 of type device] ********************
2025-03-22 22:36:55.036651 | orchestrator | changed: [localhost]
2025-03-22 22:36:55.037328 | orchestrator |
2025-03-22 22:36:55.038078 | orchestrator | PLAY RECAP *********************************************************************
2025-03-22 22:36:55.038484 | orchestrator | 2025-03-22 22:36:55 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-22 22:36:55.040622 | orchestrator | 2025-03-22 22:36:55 | INFO  | Please wait and do not abort execution.
2025-03-22 22:36:55.041373 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 22:36:55.337609 | orchestrator | changed: [localhost]
2025-03-22 22:36:55.343100 | orchestrator | 2025-03-22 22:36:55 | INFO  | Handle file /netbox/resources/300-testbed-node-7.yml
2025-03-22 22:36:55.345190 | orchestrator |
2025-03-22 22:36:55.347104 | orchestrator | TASK [Manage NetBox resource testbed-manager of type device] *******************
2025-03-22 22:36:55.628945 | orchestrator | changed: [localhost]
2025-03-22 22:36:55.631072 | orchestrator |
2025-03-22 22:36:55.631103 | orchestrator | PLAY RECAP *********************************************************************
2025-03-22 22:36:55.631118 | orchestrator | 2025-03-22 22:36:55 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-22 22:36:55.631132 | orchestrator | 2025-03-22 22:36:55 | INFO  | Please wait and do not abort execution.
2025-03-22 22:36:55.631151 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 22:36:55.910372 | orchestrator | 2025-03-22 22:36:55 | INFO  | Handle file /netbox/resources/300-testbed-node-2.yml
2025-03-22 22:36:56.296729 | orchestrator |
2025-03-22 22:36:56.297881 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-7.yml] ***************
2025-03-22 22:36:56.331909 | orchestrator | changed: [localhost]
2025-03-22 22:36:56.336054 | orchestrator |
2025-03-22 22:36:56.337432 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:56.343398 | orchestrator |
2025-03-22 22:36:56.344446 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:56.819507 | orchestrator |
2025-03-22 22:36:56.822578 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-2.yml] ***************
2025-03-22 22:36:56.877719 | orchestrator |
2025-03-22 22:36:56.878784 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:57.456936 | orchestrator | changed: [localhost]
2025-03-22 22:36:57.461178 | orchestrator |
2025-03-22 22:36:57.461547 | orchestrator | PLAY RECAP *********************************************************************
2025-03-22 22:36:57.461585 | orchestrator | 2025-03-22 22:36:57 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-22 22:36:57.463061 | orchestrator | 2025-03-22 22:36:57 | INFO  | Please wait and do not abort execution.
2025-03-22 22:36:57.463509 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 22:36:57.760840 | orchestrator | 2025-03-22 22:36:57 | INFO  | Handle file /netbox/resources/300-testbed-switch-1.yml
2025-03-22 22:36:58.607364 | orchestrator |
2025-03-22 22:36:58.609186 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-switch-1.yml] *************
2025-03-22 22:36:58.693368 | orchestrator |
2025-03-22 22:36:58.694489 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:58.791455 | orchestrator | changed: [localhost]
2025-03-22 22:36:58.792102 | orchestrator |
2025-03-22 22:36:59.037087 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:59.037189 | orchestrator | changed: [localhost]
2025-03-22 22:36:59.037744 | orchestrator |
2025-03-22 22:36:59.037778 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:36:59.619462 | orchestrator | changed: [localhost]
2025-03-22 22:36:59.620461 | orchestrator |
2025-03-22 22:36:59.620508 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:37:01.302563 | orchestrator | changed: [localhost]
2025-03-22 22:37:01.309402 | orchestrator |
2025-03-22 22:37:01.313830 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-03-22 22:37:01.412785 | orchestrator | changed: [localhost]
2025-03-22 22:37:01.416899 | orchestrator |
2025-03-22 22:37:01.417359 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-03-22 22:37:02.149473 | orchestrator | changed: [localhost]
2025-03-22 22:37:02.150936 | orchestrator |
2025-03-22 22:37:02.153233 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:37:02.580542 | orchestrator | changed: [localhost]
2025-03-22 22:37:02.582334 | orchestrator |
2025-03-22 22:37:03.256401 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:37:03.256526 | orchestrator | changed: [localhost]
2025-03-22 22:37:03.257322 | orchestrator |
2025-03-22 22:37:03.257983 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-03-22 22:37:03.488943 | orchestrator | changed: [localhost]
2025-03-22 22:37:03.494451 | orchestrator |
2025-03-22 22:37:03.495670 | orchestrator | PLAY RECAP *********************************************************************
2025-03-22 22:37:03.498132 | orchestrator | 2025-03-22 22:37:03 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-22 22:37:03.498166 | orchestrator | 2025-03-22 22:37:03 | INFO  | Please wait and do not abort execution.
2025-03-22 22:37:03.498187 | orchestrator | localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 22:37:04.746082 | orchestrator | changed: [localhost]
2025-03-22 22:37:04.747136 | orchestrator |
2025-03-22 22:37:04.747542 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:37:04.932512 | orchestrator | changed: [localhost]
2025-03-22 22:37:04.934074 | orchestrator |
2025-03-22 22:37:04.934764 | orchestrator | TASK [Manage NetBox resource testbed-node-4 of type device] ********************
2025-03-22 22:37:05.613276 | orchestrator | changed: [localhost]
2025-03-22 22:37:05.619386 | orchestrator |
2025-03-22 22:37:05.623500 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-03-22 22:37:07.058346 | orchestrator | changed: [localhost]
2025-03-22 22:37:07.339495 | orchestrator |
2025-03-22 22:37:07.339591 | orchestrator | PLAY RECAP *********************************************************************
2025-03-22 22:37:07.339609 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 22:37:07.339624 | orchestrator | 2025-03-22 22:37:07 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-22 22:37:07.339638 | orchestrator | 2025-03-22 22:37:07 | INFO  | Please wait and do not abort execution.
2025-03-22 22:37:07.339665 | orchestrator | changed: [localhost]
2025-03-22 22:37:07.342514 | orchestrator |
2025-03-22 22:37:08.239287 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-03-22 22:37:08.239414 | orchestrator | changed: [localhost]
2025-03-22 22:37:09.699258 | orchestrator |
2025-03-22 22:37:09.699374 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-03-22 22:37:09.699407 | orchestrator | changed: [localhost]
2025-03-22 22:37:09.699611 | orchestrator |
2025-03-22 22:37:09.701846 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-03-22 22:37:10.071571 | orchestrator | changed: [localhost]
2025-03-22 22:37:10.076185 | orchestrator |
2025-03-22 22:37:10.076825 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-03-22 22:37:11.381588 | orchestrator | changed: [localhost]
2025-03-22 22:37:11.383551 | orchestrator |
2025-03-22 22:37:11.384236 | orchestrator | TASK [Manage NetBox resource testbed-node-7 of type device] ********************
2025-03-22 22:37:11.680924 | orchestrator | changed: [localhost]
2025-03-22 22:37:11.685061 | orchestrator |
2025-03-22 22:37:11.685678 | orchestrator | TASK [Manage NetBox resource testbed-node-2 of type device] ********************
2025-03-22 22:37:13.310671 | orchestrator | changed: [localhost]
2025-03-22 22:37:13.311958 | orchestrator |
2025-03-22 22:37:13.311997 | orchestrator | PLAY RECAP *********************************************************************
2025-03-22 22:37:13.312260 | orchestrator | 2025-03-22 22:37:13 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-22 22:37:13.313041 | orchestrator | 2025-03-22 22:37:13 | INFO  | Please wait and do not abort execution.
2025-03-22 22:37:13.314114 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 22:37:13.616345 | orchestrator | changed: [localhost]
2025-03-22 22:37:13.616475 | orchestrator |
2025-03-22 22:37:13.616846 | orchestrator | PLAY RECAP *********************************************************************
2025-03-22 22:37:13.617092 | orchestrator | 2025-03-22 22:37:13 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-22 22:37:13.617169 | orchestrator | 2025-03-22 22:37:13 | INFO  | Please wait and do not abort execution.
2025-03-22 22:37:13.617879 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 22:37:13.926791 | orchestrator | 2025-03-22 22:37:13 | INFO  | Runtime: 176.0614s
2025-03-22 22:37:14.444261 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-03-22 22:37:14.792505 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-03-22 22:37:14.798228 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:quincy "/entrypoint.sh osis…" ceph-ansible 4 minutes ago Up 4 minutes (healthy)
2025-03-22 22:37:14.798260 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.1 "/entrypoint.sh osis…" kolla-ansible 4 minutes ago Up 4 minutes (healthy)
2025-03-22 22:37:14.798305 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/usr/bin/tini -- os…" api 4 minutes ago Up 4 minutes (healthy) 192.168.16.5:8000->8000/tcp
2025-03-22 22:37:14.798321 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server 4 minutes ago Up 4 minutes (healthy) 8000/tcp
2025-03-22 22:37:14.798347 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/usr/bin/tini -- os…" beat 4 minutes ago Up 4 minutes (healthy)
2025-03-22 22:37:14.798362 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:latest "/usr/bin/tini -- os…" conductor 4 minutes ago Up 4 minutes (healthy)
2025-03-22 22:37:14.798376 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/usr/bin/tini -- os…" flower 4 minutes ago Up 4 minutes (healthy)
2025-03-22 22:37:14.798390 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 4 minutes ago Up 3 minutes (healthy)
2025-03-22 22:37:14.798404 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/usr/bin/tini -- os…" listener 4 minutes ago Up 4 minutes (healthy)
2025-03-22 22:37:14.798418 | orchestrator | manager-mariadb-1 index.docker.io/library/mariadb:11.7.2 "docker-entrypoint.s…" mariadb 4 minutes ago Up 4 minutes (healthy) 3306/tcp
2025-03-22 22:37:14.798432 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism:latest "/usr/bin/tini -- os…" netbox 4 minutes ago Up 4 minutes (healthy)
2025-03-22 22:37:14.798445 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/usr/bin/tini -- os…" openstack 4 minutes ago Up 4 minutes (healthy)
2025-03-22 22:37:14.798463 | orchestrator | manager-redis-1 index.docker.io/library/redis:7.4.2-alpine "docker-entrypoint.s…" redis 4 minutes ago Up 4 minutes (healthy) 6379/tcp
2025-03-22 22:37:14.798478 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:latest "/usr/bin/tini -- os…" watchdog 4 minutes ago Up 4 minutes (healthy)
2025-03-22 22:37:14.798492 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 4 minutes ago Up 4 minutes (healthy)
2025-03-22 22:37:14.798506 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 4 minutes ago Up 4 minutes (healthy)
2025-03-22 22:37:14.798520 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/usr/bin/tini -- sl…" osismclient 4 minutes ago Up 4 minutes (healthy)
2025-03-22 22:37:14.798540 | orchestrator | + docker compose --project-directory /opt/netbox ps
2025-03-22 22:37:15.035437 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-03-22 22:37:15.042847 | orchestrator | netbox-netbox-1 registry.osism.tech/osism/netbox:v4.1.10 "/usr/bin/tini -- /o…" netbox 11 minutes ago Up 10 minutes (healthy)
2025-03-22 22:37:15.042883 | orchestrator | netbox-netbox-worker-1 registry.osism.tech/osism/netbox:v4.1.10 "/opt/netbox/venv/bi…" netbox-worker 11 minutes ago Up 6 minutes (healthy)
2025-03-22 22:37:15.042898 | orchestrator | netbox-postgres-1 index.docker.io/library/postgres:16.8-alpine "docker-entrypoint.s…" postgres 11 minutes ago Up 11 minutes (healthy) 5432/tcp
2025-03-22 22:37:15.042936 | orchestrator | netbox-redis-1 index.docker.io/library/redis:7.4.2-alpine "docker-entrypoint.s…" redis 11 minutes ago Up 11 minutes (healthy) 6379/tcp
2025-03-22 22:37:15.042958 | orchestrator | ++ semver latest 7.0.0
2025-03-22 22:37:15.086504 | orchestrator | + [[ -1 -ge 0 ]]
2025-03-22 22:37:15.092809 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-03-22 22:37:15.092848 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-03-22 22:37:15.092872 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-03-22 22:37:16.944026 | orchestrator | 2025-03-22 22:37:16 | INFO  | Task 53e4fa75-bf29-4c52-a90f-13158e7be30f (resolvconf) was prepared for execution.
2025-03-22 22:37:20.737959 | orchestrator | 2025-03-22 22:37:16 | INFO  | It takes a moment until task 53e4fa75-bf29-4c52-a90f-13158e7be30f (resolvconf) has been started and output is visible here.
2025-03-22 22:37:20.738956 | orchestrator |
2025-03-22 22:37:20.739062 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-03-22 22:37:20.739089 | orchestrator |
2025-03-22 22:37:20.740250 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-03-22 22:37:20.740398 | orchestrator | Saturday 22 March 2025 22:37:20 +0000 (0:00:00.104) 0:00:00.104 ********
2025-03-22 22:37:25.770156 | orchestrator | ok: [testbed-manager]
2025-03-22 22:37:25.770967 | orchestrator |
2025-03-22 22:37:25.771328 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-03-22 22:37:25.771754 | orchestrator | Saturday 22 March 2025 22:37:25 +0000 (0:00:05.034) 0:00:05.138 ********
2025-03-22 22:37:25.835895 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:37:25.836668 | orchestrator |
2025-03-22 22:37:25.837225 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-03-22 22:37:25.837573 | orchestrator | Saturday 22 March 2025 22:37:25 +0000 (0:00:00.066) 0:00:05.205 ********
2025-03-22 22:37:25.931638 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-03-22 22:37:25.932270 | orchestrator |
2025-03-22 22:37:25.932646 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-03-22 22:37:25.932989 | orchestrator | Saturday 22 March 2025 22:37:25 +0000 (0:00:00.094) 0:00:05.299 ********
2025-03-22 22:37:26.020277 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-03-22 22:37:26.020793 | orchestrator |
2025-03-22 22:37:26.021014 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-03-22 22:37:26.021927 | orchestrator | Saturday 22 March 2025 22:37:26 +0000 (0:00:00.090) 0:00:05.390 ********
2025-03-22 22:37:27.377648 | orchestrator | ok: [testbed-manager]
2025-03-22 22:37:27.378146 | orchestrator |
2025-03-22 22:37:27.378184 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-03-22 22:37:27.378905 | orchestrator | Saturday 22 March 2025 22:37:27 +0000 (0:00:01.354) 0:00:06.744 ********
2025-03-22 22:37:27.431984 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:37:27.432359 | orchestrator |
2025-03-22 22:37:27.433384 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-03-22 22:37:27.434494 | orchestrator | Saturday 22 March 2025 22:37:27 +0000 (0:00:00.057) 0:00:06.801 ********
2025-03-22 22:37:27.986946 | orchestrator | ok: [testbed-manager]
2025-03-22 22:37:27.987401 | orchestrator |
2025-03-22 22:37:27.987437 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-03-22 22:37:27.987499 | orchestrator | Saturday 22 March 2025 22:37:27 +0000 (0:00:00.553) 0:00:07.355 ********
2025-03-22 22:37:28.063308 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:37:28.063790 | orchestrator |
2025-03-22 22:37:28.064674 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-03-22 22:37:28.688389 | orchestrator | Saturday 22 March 2025 22:37:28 +0000 (0:00:00.077) 0:00:07.432 ********
2025-03-22 22:37:28.688537 | orchestrator | changed: [testbed-manager]
2025-03-22 22:37:28.688673 | orchestrator |
2025-03-22 22:37:28.689626 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-03-22 22:37:28.690238 | orchestrator | Saturday 22 March 2025 22:37:28 +0000 (0:00:00.625) 0:00:08.058 ********
2025-03-22 22:37:30.017832 | orchestrator | changed: [testbed-manager]
2025-03-22 22:37:30.018667 | orchestrator |
2025-03-22 22:37:30.019808 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-03-22 22:37:30.019906 | orchestrator | Saturday 22 March 2025 22:37:30 +0000 (0:00:01.328) 0:00:09.386 ********
2025-03-22 22:37:31.184649 | orchestrator | ok: [testbed-manager]
2025-03-22 22:37:31.184814 | orchestrator |
2025-03-22 22:37:31.185137 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-03-22 22:37:31.185478 | orchestrator | Saturday 22 March 2025 22:37:31 +0000 (0:00:01.166) 0:00:10.552 ********
2025-03-22 22:37:31.282368 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-03-22 22:37:31.283116 | orchestrator |
2025-03-22 22:37:31.283154 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-03-22 22:37:31.283850 | orchestrator | Saturday 22 March 2025 22:37:31 +0000 (0:00:00.098) 0:00:10.651 ********
2025-03-22 22:37:32.633133 | orchestrator | changed: [testbed-manager]
2025-03-22 22:37:32.633614 | orchestrator |
2025-03-22 22:37:32.634383 | orchestrator | PLAY RECAP *********************************************************************
2025-03-22 22:37:32.635180 | orchestrator | 2025-03-22 22:37:32 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-22 22:37:32.635736 | orchestrator | 2025-03-22 22:37:32 | INFO  | Please wait and do not abort execution.
2025-03-22 22:37:32.635768 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-03-22 22:37:32.636451 | orchestrator |
2025-03-22 22:37:32.637354 | orchestrator |
2025-03-22 22:37:32.638011 | orchestrator | TASKS RECAP ********************************************************************
2025-03-22 22:37:32.638400 | orchestrator | Saturday 22 March 2025 22:37:32 +0000 (0:00:01.350) 0:00:12.002 ********
2025-03-22 22:37:32.639049 | orchestrator | ===============================================================================
2025-03-22 22:37:32.639666 | orchestrator | Gathering Facts --------------------------------------------------------- 5.03s
2025-03-22 22:37:32.640155 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.35s
2025-03-22 22:37:32.640183 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.35s
2025-03-22 22:37:32.640460 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.33s
2025-03-22 22:37:32.641360 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.17s
2025-03-22 22:37:32.642098 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.63s
2025-03-22 22:37:32.642215 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.55s
2025-03-22 22:37:32.642720 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.10s
2025-03-22 22:37:32.642938 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s
2025-03-22 22:37:32.643278 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s
2025-03-22 22:37:32.643421 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2025-03-22 22:37:32.643973 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s
2025-03-22 22:37:32.644438 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2025-03-22 22:37:33.168917 | orchestrator | + osism apply sshconfig
2025-03-22 22:37:34.870991 | orchestrator | 2025-03-22 22:37:34 | INFO  | Task 323d051e-01ab-4d0d-a6ac-78ba78b3e5f7 (sshconfig) was prepared for execution.
2025-03-22 22:37:38.752689 | orchestrator | 2025-03-22 22:37:34 | INFO  | It takes a moment until task 323d051e-01ab-4d0d-a6ac-78ba78b3e5f7 (sshconfig) has been started and output is visible here.
2025-03-22 22:37:38.752831 | orchestrator |
2025-03-22 22:37:38.753774 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-03-22 22:37:38.753809 | orchestrator |
2025-03-22 22:37:38.755803 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-03-22 22:37:38.756420 | orchestrator | Saturday 22 March 2025 22:37:38 +0000 (0:00:00.135) 0:00:00.135 ********
2025-03-22 22:37:39.406175 | orchestrator | ok: [testbed-manager]
2025-03-22 22:37:39.406673 | orchestrator |
2025-03-22 22:37:39.406743 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-03-22 22:37:39.406808 | orchestrator | Saturday 22 March 2025 22:37:39 +0000 (0:00:00.657) 0:00:00.793 ********
2025-03-22 22:37:39.985474 | orchestrator | changed: [testbed-manager]
2025-03-22 22:37:39.986395 | orchestrator |
2025-03-22 22:37:39.986957 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-03-22 22:37:39.986990 | orchestrator | Saturday 22 March 2025 22:37:39 +0000 (0:00:00.578) 0:00:01.371 ********
2025-03-22 22:37:46.283582 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-03-22 22:37:46.283956 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-03-22 22:37:46.285529 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-03-22 22:37:46.286653 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-03-22 22:37:46.288154 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-03-22 22:37:46.289274 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-03-22 22:37:46.289484 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-03-22 22:37:46.289514 | orchestrator |
2025-03-22 22:37:46.289796 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-03-22 22:37:46.290325 | orchestrator | Saturday 22 March 2025 22:37:46 +0000 (0:00:06.297) 0:00:07.669 ********
2025-03-22 22:37:46.355888 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:37:46.356099 | orchestrator |
2025-03-22 22:37:46.356129 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-03-22 22:37:46.356319 | orchestrator | Saturday 22 March 2025 22:37:46 +0000 (0:00:00.072) 0:00:07.741 ********
2025-03-22 22:37:46.996158 | orchestrator | changed: [testbed-manager]
2025-03-22 22:37:46.998063 | orchestrator |
2025-03-22 22:37:46.999562 | orchestrator | PLAY RECAP *********************************************************************
2025-03-22 22:37:46.999914 | orchestrator | 2025-03-22 22:37:46 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-22 22:37:46.999942 | orchestrator | 2025-03-22 22:37:46 | INFO  | Please wait and do not abort execution.
2025-03-22 22:37:46.999963 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-03-22 22:37:47.000894 | orchestrator |
2025-03-22 22:37:47.001338 | orchestrator |
2025-03-22 22:37:47.002935 | orchestrator | TASKS RECAP ********************************************************************
2025-03-22 22:37:47.003854 | orchestrator | Saturday 22 March 2025 22:37:46 +0000 (0:00:00.642) 0:00:08.384 ********
2025-03-22 22:37:47.004444 | orchestrator | ===============================================================================
2025-03-22 22:37:47.004992 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.30s
2025-03-22 22:37:47.005833 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.66s
2025-03-22 22:37:47.006335 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.64s
2025-03-22 22:37:47.006792 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.58s
2025-03-22 22:37:47.007469 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s
2025-03-22 22:37:47.519757 | orchestrator | + osism apply known-hosts
2025-03-22 22:37:49.285572 | orchestrator | 2025-03-22 22:37:49 | INFO  | Task 9db8c75f-8b69-451b-bb8a-ff37c5164488 (known-hosts) was prepared for execution.
2025-03-22 22:37:52.834590 | orchestrator | 2025-03-22 22:37:49 | INFO  | It takes a moment until task 9db8c75f-8b69-451b-bb8a-ff37c5164488 (known-hosts) has been started and output is visible here.
2025-03-22 22:37:52.834731 | orchestrator |
2025-03-22 22:37:52.836906 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-03-22 22:37:52.838492 | orchestrator |
2025-03-22 22:37:52.838529 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-03-22 22:37:52.840166 | orchestrator | Saturday 22 March 2025 22:37:52 +0000 (0:00:00.132) 0:00:00.133 ********
2025-03-22 22:37:58.982833 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-03-22 22:37:58.984625 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-03-22 22:37:58.985732 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-03-22 22:37:58.988222 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-03-22 22:37:58.988727 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-03-22 22:37:58.989501 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-03-22 22:37:58.990624 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-03-22 22:37:58.991329 | orchestrator |
2025-03-22 22:37:58.992162 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-03-22 22:37:58.992984 | orchestrator | Saturday 22 March 2025 22:37:58 +0000 (0:00:06.148) 0:00:06.281 ********
2025-03-22 22:37:59.212875 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-03-22 22:37:59.214113 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-03-22 22:37:59.214148 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-03-22 22:37:59.214812 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-03-22 22:37:59.215086 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-03-22 22:37:59.215906 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-03-22 22:37:59.216288 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-03-22 22:37:59.216578 | orchestrator |
2025-03-22 22:37:59.217180 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-03-22 22:37:59.217812 | orchestrator | Saturday 22 March 2025 22:37:59 +0000 (0:00:00.228) 0:00:06.510 ********
2025-03-22 22:38:00.526805 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCf6lFyhXJvZTvBw2d2P4+XkPz/wwOdypfnHbLNuUCBPKOyb9CJXD3I17rZp4Bj3nfuuhp/WaAn6njGv3mv3qvtUGE9jhsTFf7CwDRWfmFYvcTkys5rJAH+xETsSzzae6gxsyQxZ1SUANKhwKvYSx1kuLFI9kUdLZ4Y5TKvnVrHrVSJSgvXJE9m6Us/NAItyYlwGIXhAIgz3KM21nxCMoE1UhBbOUaOOnkUxf0b12CMsEjJJUv7CcLtOJBrL2T0K0YcQ3FnNBXD5RWDmamK9ayT9SJJZjzORilAXyf1EWnvuaaGJ3q6hGJHmARCFJVxtWVQpsW5Yv854Y+j8p0LXxbClfBrP99JKx7CRYZV7PYQ8qIedy0kDfNYQQzlO/99Rr//x2XikODPGQ50zeM9Nufva0wkqjz+0WXBoZoMScENoMm2ljilj17LlK9WygqhDzcQh8RyQEUpM7Y1GpoWS1GGHky7MHQU1tm+s+kGeyT5V6EKBk0zspIFK9xKvFfJTHE=)
2025-03-22 22:38:00.528565 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB9VmE2fdE0v72nkSMVQthoHIOjyrbE44CGPYZ1uFAfJii2pTzkgALvL18hGqBICCn/GkaGp64Rc83bhrPjxef8=)
2025-03-22 22:38:00.530455 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMNcVl+1cf0gTEf2hO678PCZD+RNnGyMCSRDS7jTFRUJ)
2025-03-22 22:38:00.531109 | orchestrator |
2025-03-22 22:38:00.531490 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-03-22 22:38:00.533410 | orchestrator | Saturday 22 March 2025 22:38:00 +0000 (0:00:01.316) 0:00:07.827 ********
2025-03-22 22:38:01.723481 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFWNcryga0Ev2sEMZhGNapUby8sm0C2dwVlVBhq1lixYcyfLVuGrIivATy897ZZHr1qQlaEWlPe3P1I6C+gKLcw=)
2025-03-22 22:38:01.723972 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+rXfiOiaHxyYLWrMqoK0tZ4n66HdmjqSo4yfQ9twjrFSkbBlrBXYRGMetrPId0plHYJPSpZSmALFIERjBvGKMtjlArRPeBy4rBunQjoz4tjPYyr9/yeuDq0w/qayL+B3IQ6uUL6fAkDza+CdBMs4+7fVmsDqzqWHVLgstiIKBlI0MY6RFCdN97n9LOm+pnBvFuV3VZg+rJagdiDI2G0eGC0mT08ZCwya3nzpL7RbRq1kLCuLqqIZE9+RQPuTSb4wy/6j3vqTXI6fCojfHJJwvawXMwvUWy7D9kJ5hFOwkAH/y2PV7Hy9YC2kTa9jBi6EdOqF09VAk8KaE8Fj7h5LdblvN4YjmPTosjx4PnjUZ0XXRn7RH1MU+27aJfNlAvx6lxp7q/4f5Z1czhAlWyFGs0vRLhsnfQ/2pzuyAjj5WA+vSH86XzyK3P3bpOE7LSIstcrKpkmkamGeCLhnxNJ+8YXqg4GnORdAxNEnQhANOERQvEZKvTvbnOY/1SIjqfPE=)
2025-03-22 22:38:01.725620 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII1WAX82bvjhmt2858kp56Gd7Eg5a2ki/6HdcJMNbJWi)
2025-03-22 22:38:01.726435 | orchestrator |
2025-03-22 22:38:01.728260 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-03-22 22:38:01.728575 | orchestrator | Saturday 22 March 2025 22:38:01 +0000 (0:00:01.196) 0:00:09.023 ********
2025-03-22 22:38:02.933647 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDtn7y82nlkTH30iLcft1ywuOySn9kMKAu8AxNUw4YOHht75KUBxIBGcY66PchRQa/KQ7y2zTT2UIcWwlSV3RIJHUPebo4e4JMRfphLv+jEpJACKe5Alr71PcMMlIcbDE2iR0jNJwa3TXBoJhuMBhOimqrnrAa/3bzAMu0QZh13we7/fCbaRUpM1CtVHBQIqYPvwAYnSvvBIr19yxiUM9QlOyVj1Kh4r5F3o7wH+o4mDcLx1Z9LBPEqsXauYKXmZs+i0RH8Pvxq04gii7iXAMgVrsfewQETX+QHqHCiQ0JTEiUQV6kwp13No/xJiZFG/zArgQPTjRP+zL1uQmncM8kyDqm0aHSJtYa9NXyE27LgGWvqyaWNpRRKxLk++6GxcrFwGCC0NzFRZLIRGRs4ax/j4lGYGbd/S/GIcHNpiEXUOOBdI0mZyx0QOTUw4xr+1L1IRfSo4bwMQqxquUX79iTpKYIm1VzWGl6od50ykx4NEdTAgSzvqN+WmYdll5dBcjE=)
2025-03-22 22:38:02.933979 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBpWdOELk323YN+pq0OWfE5lCNURbeWeKTTvrS1henwy56bPrLOOQCdGHr3QVprghfbxDbvo9aCXtXRdAgYnS/0=)
2025-03-22 22:38:02.934714 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDtjnU3tR4x5ExfAAlFb6ewiJ85HGq4nn9g7PplKwyGw)
2025-03-22 22:38:02.935156 | orchestrator |
2025-03-22 22:38:02.935847 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-03-22 22:38:02.936177 | orchestrator | Saturday 22 March 2025 22:38:02 +0000 (0:00:01.209) 0:00:10.233 ********
2025-03-22 22:38:04.199055 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCi37K5ZV0/J+mxu0NRegbFZ62eHVZQUcQh3njRKIyTduqVXbStPxWbPz3ATdbYqdcL3M73CQH3b3I+eotGYsqHwA34yiEJtFiWyC23JDYU69ULuhKCl51N/tkczc5whPJ0W6vuMM9EdRsNG3zQ6Vj0Gf4JEdOky9ni7NMiwPwQxtpp6AYMcmkZRe4qcy4UpAhYHQQzISCP2B9QvIrAAYIL5nvcAfEg/1crfzH2WILx29t86n59O4zD8ktUGwStA8ItjccJRsWAsUI4SDMlHZh1kt9kfAVjmk84KLYhBCwNhM1HK1nvn9SPyeHf3yZA5kY0I0LJ6ljUQvs47u7FxsXlwwL24+8PBuh8b6nE5X8kmK1l7UqQydVMt3/KsexWk8dvDWhzarQLdmD45R7y7lBsSvm3LJLdbbZP/4Gi0Sd9QIWF8bqYTCQN99XjEyqh2eLj4EYtrAUs8n+mqJ406P+VSXWRNruTv5/Y/CwQ0H8S8GB5BSUAR4QeOqSPa8aIR9k=)
2025-03-22 22:38:04.199437 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLYhcwSUZQYRm13I0hk/xYRYq79+/r4ZOTSeHLpKZ72Mm2m+0n2R7pliOqtE0EMRyZhFD4Gyv6bulZSOaI8wiGk=)
2025-03-22 22:38:04.199781 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBiDRWraL79oZhoXItjyX1x2W17Hpvwuv+OegWmcUhcz)
2025-03-22 22:38:04.200343 | orchestrator |
2025-03-22 22:38:04.200621 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-03-22 22:38:04.201349 | orchestrator | Saturday 22 March 2025 22:38:04 +0000 (0:00:01.265) 0:00:11.499 ********
2025-03-22 22:38:05.382502 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDf+CLho9dOGIfWVSAoogf29ebEee+kpD542A192EUph4cPmhx1fjxPDYNKtZpBxfhhDmvIZ/t5lGe8qLVBZ5zM5i/AuHXhwIcHi7GDV0mn9nbuuAdjpcgp/S6FdCoF7RX8yDYrOLzXVJTxFFitRURh+94B0l9YiJ/nV9i8EnQfNNvWQU97AdXO0C6oFiPMDU446RHRAKiGLqt9kd1AAZRmuw1tt87odTPqRjJw97fi0zVdMVOnBeZoFovFxX15BondX+YxJjxSM8REy5ZUTaAJPnTCpTySZnKmcAJbYckfJAy/Orle33sLM8q9jX9iVOY0lHk+nBtd2SfbBiBqXLHlwKon9CD1jelJrRIFO6rDTYtgIGxW7hEPNqyWT4myNI4W5RuVYQpXvp3RZgPCPBOco6gDRNm+afdz6mLCeBU3/v77HHHotRj/IcQzisnCJAerG4V5mI9T1OUpPT9yZ4WFvKuRqGYY6uA2UzI19D3qaUrCtuX2aG0dsEhCzjlkQ9E=)
2025-03-22 22:38:05.383018 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB1mvze6joDiNTkfjWYsBfJRattHiT3l4wHx+jTbnlGprdCAujSzudt5x4KMEuBJZVyLr0CHjvLJCtnBWGC/ysI=)
2025-03-22 22:38:05.383812 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBB0d13l27CLnGUbyJEf1GBBBaaNaJB2FmgwM5mXiHUv)
2025-03-22 22:38:05.386329 | orchestrator |
2025-03-22 22:38:05.386544 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-03-22 22:38:05.386574 | orchestrator | Saturday 22 March 2025 22:38:05 +0000 (0:00:01.183) 0:00:12.683 ********
2025-03-22 22:38:06.634752 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLMpMP2VU3+6ynRpA+TX/4a5hiFE/REQKYK10Wm609bjiweMDJhr7UMc9/VeQVLnsKjxPZlqpsumlpPCITe5GxPAg5ejG/9+/p3+XhdNGAXvSz93Msg4WJrJAl98jXKp7wR4F1A8GdcUPKtQ++FlbIkJaHM995iD9YjviYXIc8azhDrDBvIStrxyIToliQYB90/ZJrtsslu2IsHPdQ/U5a7IDc/RycItj1omCkLfU9UsjbstUN4q3dFBhryCKZnxbFauc87lBEWqHaVLGqXLYouO0uO9KLo92BrYK7hX23m5ZFNxB7Xnp4X9ByG8Uf2WUapr+a44UjdAbkJBV0CT7RS6vN07nK/tW+6/1c8SPnn/mjSf+yKueX/onbl+x9XOsgJvqudtNtz06V4J9z9KEyqvvhUePHoQusxGDZHE7rDDF/I8Nl7nu2c59ps8VAPsPBavjmWcpIky6LAn4bfZeY92rvLYZSkLTqikLY1PEPELk/sDlBsavsDMCdg8CwjKc=)
2025-03-22 22:38:06.635727 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAdf+jjsK/gfNYE7rZDuhc99cZqEmsnhtp5GdZDDObHRT3HTJnFavWRFURHYdA3XzuFvkP4Ieepmkh1UZaZk0KQ=)
2025-03-22 22:38:06.636657 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBn8t7SHsX+EVUEQ84zce/0FTNuUCeNywj7QFe0aMBzg)
2025-03-22 22:38:06.637584 | orchestrator |
2025-03-22 22:38:06.638147 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-03-22 22:38:06.639287 | orchestrator | Saturday 22 March 2025 22:38:06 +0000 (0:00:01.249) 0:00:13.933 ********
2025-03-22 22:38:07.859604 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZca4JCa5jjXQHW9ebbCBjOnIspNsOKoBo+/QMA8RxuSEJqjDyctDeQjOIdWB32plTjHzWU3ENWIwQHPUYP9hL/eK2Ou9bN+Vd5D3Xf8pFI4/R8WAikCZjC1gP5+qzUmExTAO8Vr28Y1JwDTikwgQPuE+LL2pt4cNN7YWKIIuLTShxgs8VLKc2QWVTSZ/3ekhRCQjJeqV1ypleh9tNVN61bQCH7KLBl/NyscsWcFht5Q2BasIOmNSjBf+xcZA+lmboPHb/FCk1EtNDEKC1h+BFyQMt/m0pLO5WNKYnktt71ckUkc9XRh517ug1Poi0uCXo4TX+L/FOpaWfqhAATMuwIWFbblbyLeHN9C7MmZseit9bGTq6ul41LrIpBOWKxgZbKEermQs1w/vOMyamkY16gYgQcjlcdEO52UeU21/H1hYMhQ5R5KBl6gkFOfi3wlFbZL/58TniPCiAkwFUrGwhm01NEw1CEfPvO8I46fkPOGTizjCLKzHBUjQsSy5JNzU=)
2025-03-22 22:38:07.859984 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKkZsWb0wZhM2aMV7zH2DtO9MEH1I2rC4/YHGnoBgNeeaBmaDt+RnPccl2+oNBG5eJdf+LNGm009CAU14/GsQfo=)
2025-03-22 22:38:07.860038 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ/xCrluJwUHq+0ZEYJMcvrV48lJuER92v67gLhju0Ct)
2025-03-22 22:38:07.860054 | orchestrator |
2025-03-22 22:38:07.860076 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2025-03-22 22:38:07.860953 | orchestrator | Saturday 22 March 2025 22:38:07 +0000 (0:00:01.224)
0:00:15.157 ******** 2025-03-22 22:38:13.492496 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-03-22 22:38:13.493386 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-03-22 22:38:13.493438 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-03-22 22:38:13.493663 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-03-22 22:38:13.493846 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-03-22 22:38:13.494157 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-03-22 22:38:13.494389 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-03-22 22:38:13.495366 | orchestrator | 2025-03-22 22:38:13.681778 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-03-22 22:38:13.681825 | orchestrator | Saturday 22 March 2025 22:38:13 +0000 (0:00:05.634) 0:00:20.792 ******** 2025-03-22 22:38:13.681848 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-03-22 22:38:13.681903 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-03-22 22:38:13.682462 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-03-22 22:38:13.683005 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-03-22 22:38:13.683047 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-03-22 22:38:13.683252 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-03-22 22:38:13.683549 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-03-22 22:38:13.683890 | orchestrator | 2025-03-22 22:38:13.684175 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-22 22:38:13.684422 | orchestrator | Saturday 22 March 2025 22:38:13 +0000 (0:00:00.191) 0:00:20.984 ******** 2025-03-22 22:38:14.877423 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCf6lFyhXJvZTvBw2d2P4+XkPz/wwOdypfnHbLNuUCBPKOyb9CJXD3I17rZp4Bj3nfuuhp/WaAn6njGv3mv3qvtUGE9jhsTFf7CwDRWfmFYvcTkys5rJAH+xETsSzzae6gxsyQxZ1SUANKhwKvYSx1kuLFI9kUdLZ4Y5TKvnVrHrVSJSgvXJE9m6Us/NAItyYlwGIXhAIgz3KM21nxCMoE1UhBbOUaOOnkUxf0b12CMsEjJJUv7CcLtOJBrL2T0K0YcQ3FnNBXD5RWDmamK9ayT9SJJZjzORilAXyf1EWnvuaaGJ3q6hGJHmARCFJVxtWVQpsW5Yv854Y+j8p0LXxbClfBrP99JKx7CRYZV7PYQ8qIedy0kDfNYQQzlO/99Rr//x2XikODPGQ50zeM9Nufva0wkqjz+0WXBoZoMScENoMm2ljilj17LlK9WygqhDzcQh8RyQEUpM7Y1GpoWS1GGHky7MHQU1tm+s+kGeyT5V6EKBk0zspIFK9xKvFfJTHE=) 2025-03-22 22:38:14.877571 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB9VmE2fdE0v72nkSMVQthoHIOjyrbE44CGPYZ1uFAfJii2pTzkgALvL18hGqBICCn/GkaGp64Rc83bhrPjxef8=) 2025-03-22 22:38:14.880638 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMNcVl+1cf0gTEf2hO678PCZD+RNnGyMCSRDS7jTFRUJ) 2025-03-22 
22:38:14.881361 | orchestrator | 2025-03-22 22:38:14.881393 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-22 22:38:14.881768 | orchestrator | Saturday 22 March 2025 22:38:14 +0000 (0:00:01.193) 0:00:22.178 ******** 2025-03-22 22:38:16.091435 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFWNcryga0Ev2sEMZhGNapUby8sm0C2dwVlVBhq1lixYcyfLVuGrIivATy897ZZHr1qQlaEWlPe3P1I6C+gKLcw=) 2025-03-22 22:38:16.092507 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+rXfiOiaHxyYLWrMqoK0tZ4n66HdmjqSo4yfQ9twjrFSkbBlrBXYRGMetrPId0plHYJPSpZSmALFIERjBvGKMtjlArRPeBy4rBunQjoz4tjPYyr9/yeuDq0w/qayL+B3IQ6uUL6fAkDza+CdBMs4+7fVmsDqzqWHVLgstiIKBlI0MY6RFCdN97n9LOm+pnBvFuV3VZg+rJagdiDI2G0eGC0mT08ZCwya3nzpL7RbRq1kLCuLqqIZE9+RQPuTSb4wy/6j3vqTXI6fCojfHJJwvawXMwvUWy7D9kJ5hFOwkAH/y2PV7Hy9YC2kTa9jBi6EdOqF09VAk8KaE8Fj7h5LdblvN4YjmPTosjx4PnjUZ0XXRn7RH1MU+27aJfNlAvx6lxp7q/4f5Z1czhAlWyFGs0vRLhsnfQ/2pzuyAjj5WA+vSH86XzyK3P3bpOE7LSIstcrKpkmkamGeCLhnxNJ+8YXqg4GnORdAxNEnQhANOERQvEZKvTvbnOY/1SIjqfPE=) 2025-03-22 22:38:16.092559 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII1WAX82bvjhmt2858kp56Gd7Eg5a2ki/6HdcJMNbJWi) 2025-03-22 22:38:16.092572 | orchestrator | 2025-03-22 22:38:16.092588 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-22 22:38:16.092945 | orchestrator | Saturday 22 March 2025 22:38:16 +0000 (0:00:01.212) 0:00:23.390 ******** 2025-03-22 22:38:17.302424 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDtn7y82nlkTH30iLcft1ywuOySn9kMKAu8AxNUw4YOHht75KUBxIBGcY66PchRQa/KQ7y2zTT2UIcWwlSV3RIJHUPebo4e4JMRfphLv+jEpJACKe5Alr71PcMMlIcbDE2iR0jNJwa3TXBoJhuMBhOimqrnrAa/3bzAMu0QZh13we7/fCbaRUpM1CtVHBQIqYPvwAYnSvvBIr19yxiUM9QlOyVj1Kh4r5F3o7wH+o4mDcLx1Z9LBPEqsXauYKXmZs+i0RH8Pvxq04gii7iXAMgVrsfewQETX+QHqHCiQ0JTEiUQV6kwp13No/xJiZFG/zArgQPTjRP+zL1uQmncM8kyDqm0aHSJtYa9NXyE27LgGWvqyaWNpRRKxLk++6GxcrFwGCC0NzFRZLIRGRs4ax/j4lGYGbd/S/GIcHNpiEXUOOBdI0mZyx0QOTUw4xr+1L1IRfSo4bwMQqxquUX79iTpKYIm1VzWGl6od50ykx4NEdTAgSzvqN+WmYdll5dBcjE=) 2025-03-22 22:38:17.303109 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBpWdOELk323YN+pq0OWfE5lCNURbeWeKTTvrS1henwy56bPrLOOQCdGHr3QVprghfbxDbvo9aCXtXRdAgYnS/0=) 2025-03-22 22:38:17.303990 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDtjnU3tR4x5ExfAAlFb6ewiJ85HGq4nn9g7PplKwyGw) 2025-03-22 22:38:17.304974 | orchestrator | 2025-03-22 22:38:17.305307 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-22 22:38:17.306168 | orchestrator | Saturday 22 March 2025 22:38:17 +0000 (0:00:01.212) 0:00:24.603 ******** 2025-03-22 22:38:18.492744 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLYhcwSUZQYRm13I0hk/xYRYq79+/r4ZOTSeHLpKZ72Mm2m+0n2R7pliOqtE0EMRyZhFD4Gyv6bulZSOaI8wiGk=) 2025-03-22 22:38:18.493558 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCi37K5ZV0/J+mxu0NRegbFZ62eHVZQUcQh3njRKIyTduqVXbStPxWbPz3ATdbYqdcL3M73CQH3b3I+eotGYsqHwA34yiEJtFiWyC23JDYU69ULuhKCl51N/tkczc5whPJ0W6vuMM9EdRsNG3zQ6Vj0Gf4JEdOky9ni7NMiwPwQxtpp6AYMcmkZRe4qcy4UpAhYHQQzISCP2B9QvIrAAYIL5nvcAfEg/1crfzH2WILx29t86n59O4zD8ktUGwStA8ItjccJRsWAsUI4SDMlHZh1kt9kfAVjmk84KLYhBCwNhM1HK1nvn9SPyeHf3yZA5kY0I0LJ6ljUQvs47u7FxsXlwwL24+8PBuh8b6nE5X8kmK1l7UqQydVMt3/KsexWk8dvDWhzarQLdmD45R7y7lBsSvm3LJLdbbZP/4Gi0Sd9QIWF8bqYTCQN99XjEyqh2eLj4EYtrAUs8n+mqJ406P+VSXWRNruTv5/Y/CwQ0H8S8GB5BSUAR4QeOqSPa8aIR9k=) 2025-03-22 22:38:18.493616 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBiDRWraL79oZhoXItjyX1x2W17Hpvwuv+OegWmcUhcz) 2025-03-22 22:38:18.494134 | orchestrator | 2025-03-22 22:38:18.494745 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-22 22:38:18.495151 | orchestrator | Saturday 22 March 2025 22:38:18 +0000 (0:00:01.189) 0:00:25.792 ******** 2025-03-22 22:38:19.760353 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDf+CLho9dOGIfWVSAoogf29ebEee+kpD542A192EUph4cPmhx1fjxPDYNKtZpBxfhhDmvIZ/t5lGe8qLVBZ5zM5i/AuHXhwIcHi7GDV0mn9nbuuAdjpcgp/S6FdCoF7RX8yDYrOLzXVJTxFFitRURh+94B0l9YiJ/nV9i8EnQfNNvWQU97AdXO0C6oFiPMDU446RHRAKiGLqt9kd1AAZRmuw1tt87odTPqRjJw97fi0zVdMVOnBeZoFovFxX15BondX+YxJjxSM8REy5ZUTaAJPnTCpTySZnKmcAJbYckfJAy/Orle33sLM8q9jX9iVOY0lHk+nBtd2SfbBiBqXLHlwKon9CD1jelJrRIFO6rDTYtgIGxW7hEPNqyWT4myNI4W5RuVYQpXvp3RZgPCPBOco6gDRNm+afdz6mLCeBU3/v77HHHotRj/IcQzisnCJAerG4V5mI9T1OUpPT9yZ4WFvKuRqGYY6uA2UzI19D3qaUrCtuX2aG0dsEhCzjlkQ9E=) 2025-03-22 22:38:19.760561 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB1mvze6joDiNTkfjWYsBfJRattHiT3l4wHx+jTbnlGprdCAujSzudt5x4KMEuBJZVyLr0CHjvLJCtnBWGC/ysI=) 2025-03-22 22:38:19.761958 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBB0d13l27CLnGUbyJEf1GBBBaaNaJB2FmgwM5mXiHUv) 2025-03-22 22:38:19.761993 | orchestrator | 2025-03-22 22:38:19.762338 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-22 22:38:19.762390 | orchestrator | Saturday 22 March 2025 22:38:19 +0000 (0:00:01.267) 0:00:27.060 ******** 2025-03-22 22:38:21.064906 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLMpMP2VU3+6ynRpA+TX/4a5hiFE/REQKYK10Wm609bjiweMDJhr7UMc9/VeQVLnsKjxPZlqpsumlpPCITe5GxPAg5ejG/9+/p3+XhdNGAXvSz93Msg4WJrJAl98jXKp7wR4F1A8GdcUPKtQ++FlbIkJaHM995iD9YjviYXIc8azhDrDBvIStrxyIToliQYB90/ZJrtsslu2IsHPdQ/U5a7IDc/RycItj1omCkLfU9UsjbstUN4q3dFBhryCKZnxbFauc87lBEWqHaVLGqXLYouO0uO9KLo92BrYK7hX23m5ZFNxB7Xnp4X9ByG8Uf2WUapr+a44UjdAbkJBV0CT7RS6vN07nK/tW+6/1c8SPnn/mjSf+yKueX/onbl+x9XOsgJvqudtNtz06V4J9z9KEyqvvhUePHoQusxGDZHE7rDDF/I8Nl7nu2c59ps8VAPsPBavjmWcpIky6LAn4bfZeY92rvLYZSkLTqikLY1PEPELk/sDlBsavsDMCdg8CwjKc=) 2025-03-22 22:38:21.065240 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAdf+jjsK/gfNYE7rZDuhc99cZqEmsnhtp5GdZDDObHRT3HTJnFavWRFURHYdA3XzuFvkP4Ieepmkh1UZaZk0KQ=) 2025-03-22 22:38:21.065912 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBn8t7SHsX+EVUEQ84zce/0FTNuUCeNywj7QFe0aMBzg) 2025-03-22 22:38:21.066593 | orchestrator | 2025-03-22 22:38:21.067538 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-22 22:38:21.068413 | orchestrator | Saturday 22 March 2025 22:38:21 +0000 (0:00:01.302) 0:00:28.362 ******** 2025-03-22 22:38:22.293519 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDZca4JCa5jjXQHW9ebbCBjOnIspNsOKoBo+/QMA8RxuSEJqjDyctDeQjOIdWB32plTjHzWU3ENWIwQHPUYP9hL/eK2Ou9bN+Vd5D3Xf8pFI4/R8WAikCZjC1gP5+qzUmExTAO8Vr28Y1JwDTikwgQPuE+LL2pt4cNN7YWKIIuLTShxgs8VLKc2QWVTSZ/3ekhRCQjJeqV1ypleh9tNVN61bQCH7KLBl/NyscsWcFht5Q2BasIOmNSjBf+xcZA+lmboPHb/FCk1EtNDEKC1h+BFyQMt/m0pLO5WNKYnktt71ckUkc9XRh517ug1Poi0uCXo4TX+L/FOpaWfqhAATMuwIWFbblbyLeHN9C7MmZseit9bGTq6ul41LrIpBOWKxgZbKEermQs1w/vOMyamkY16gYgQcjlcdEO52UeU21/H1hYMhQ5R5KBl6gkFOfi3wlFbZL/58TniPCiAkwFUrGwhm01NEw1CEfPvO8I46fkPOGTizjCLKzHBUjQsSy5JNzU=) 2025-03-22 22:38:22.293684 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKkZsWb0wZhM2aMV7zH2DtO9MEH1I2rC4/YHGnoBgNeeaBmaDt+RnPccl2+oNBG5eJdf+LNGm009CAU14/GsQfo=) 2025-03-22 22:38:22.293717 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ/xCrluJwUHq+0ZEYJMcvrV48lJuER92v67gLhju0Ct) 2025-03-22 22:38:22.296305 | orchestrator | 2025-03-22 22:38:22.505055 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-03-22 22:38:22.505110 | orchestrator | Saturday 22 March 2025 22:38:22 +0000 (0:00:01.229) 0:00:29.592 ******** 2025-03-22 22:38:22.505134 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-03-22 22:38:22.505602 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-03-22 22:38:22.506521 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-03-22 22:38:22.507432 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-03-22 22:38:22.508117 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-03-22 22:38:22.509701 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-03-22 22:38:22.509932 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-03-22 22:38:22.510745 | orchestrator | 
skipping: [testbed-manager] 2025-03-22 22:38:22.511737 | orchestrator | 2025-03-22 22:38:22.512288 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-03-22 22:38:22.513026 | orchestrator | Saturday 22 March 2025 22:38:22 +0000 (0:00:00.213) 0:00:29.806 ******** 2025-03-22 22:38:22.691603 | orchestrator | skipping: [testbed-manager] 2025-03-22 22:38:22.691886 | orchestrator | 2025-03-22 22:38:22.692385 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-03-22 22:38:22.692521 | orchestrator | Saturday 22 March 2025 22:38:22 +0000 (0:00:00.187) 0:00:29.994 ******** 2025-03-22 22:38:22.758081 | orchestrator | skipping: [testbed-manager] 2025-03-22 22:38:22.758794 | orchestrator | 2025-03-22 22:38:22.759800 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-03-22 22:38:22.761173 | orchestrator | Saturday 22 March 2025 22:38:22 +0000 (0:00:00.066) 0:00:30.060 ******** 2025-03-22 22:38:23.339573 | orchestrator | changed: [testbed-manager] 2025-03-22 22:38:23.339976 | orchestrator | 2025-03-22 22:38:23.340009 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-22 22:38:23.340386 | orchestrator | 2025-03-22 22:38:23 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-22 22:38:23.340772 | orchestrator | 2025-03-22 22:38:23 | INFO  | Please wait and do not abort execution. 
2025-03-22 22:38:23.340804 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-22 22:38:23.341799 | orchestrator | 2025-03-22 22:38:23.342217 | orchestrator | 2025-03-22 22:38:23.342702 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-22 22:38:23.342779 | orchestrator | Saturday 22 March 2025 22:38:23 +0000 (0:00:00.579) 0:00:30.640 ******** 2025-03-22 22:38:23.343634 | orchestrator | =============================================================================== 2025-03-22 22:38:23.344345 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.15s 2025-03-22 22:38:23.344373 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.63s 2025-03-22 22:38:23.344917 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.32s 2025-03-22 22:38:23.345463 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.30s 2025-03-22 22:38:23.346109 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.27s 2025-03-22 22:38:23.346914 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.27s 2025-03-22 22:38:23.347360 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.25s 2025-03-22 22:38:23.347860 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.23s 2025-03-22 22:38:23.348132 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.22s 2025-03-22 22:38:23.348425 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2025-03-22 22:38:23.348708 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2025-03-22 22:38:23.349219 | 
orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2025-03-22 22:38:23.349329 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2025-03-22 22:38:23.349767 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2025-03-22 22:38:23.350261 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2025-03-22 22:38:23.350582 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2025-03-22 22:38:23.350997 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.58s 2025-03-22 22:38:23.351279 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.23s 2025-03-22 22:38:23.351665 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.21s 2025-03-22 22:38:23.352054 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.19s 2025-03-22 22:38:23.867302 | orchestrator | + osism apply squid 2025-03-22 22:38:25.674576 | orchestrator | 2025-03-22 22:38:25 | INFO  | Task 104457dc-bc7d-44f2-9d23-cb4ee674d3af (squid) was prepared for execution. 2025-03-22 22:38:29.539186 | orchestrator | 2025-03-22 22:38:25 | INFO  | It takes a moment until task 104457dc-bc7d-44f2-9d23-cb4ee674d3af (squid) has been started and output is visible here. 
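The "Write scanned known_hosts entries" tasks above append one `<host> <key-type> <base64-key>` line per scanned key, first keyed by hostname and then by `ansible_host` IP. A minimal sketch of that line format (the `parse_known_hosts_line` helper is hypothetical, for illustration only — it is not part of the osism role):

```python
# Sketch of the known_hosts entry format written by the tasks above:
# "<host> <key-type> <base64-key>", one line per scanned key.
# parse_known_hosts_line is a hypothetical helper, not the role's code.

def parse_known_hosts_line(line: str) -> dict:
    """Split a known_hosts entry into its three whitespace-separated fields."""
    host, key_type, key = line.strip().split(None, 2)
    return {"host": host, "type": key_type, "key": key}

entry = parse_known_hosts_line(
    "testbed-node-2 ssh-ed25519 "
    "AAAAC3NzaC1lZDI1NTE5AAAAIBiDRWraL79oZhoXItjyX1x2W17Hpvwuv+OegWmcUhcz"
)
print(entry["type"])  # ssh-ed25519
```

The same keys appear twice in the log because each node is written once under its inventory hostname and once under its 192.168.16.x address.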
2025-03-22 22:38:29.540101 | orchestrator | 2025-03-22 22:38:29.540224 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-03-22 22:38:29.540251 | orchestrator | 2025-03-22 22:38:29.540828 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-03-22 22:38:29.542864 | orchestrator | Saturday 22 March 2025 22:38:29 +0000 (0:00:00.140) 0:00:00.140 ******** 2025-03-22 22:38:29.637386 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-03-22 22:38:29.637539 | orchestrator | 2025-03-22 22:38:29.638633 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-03-22 22:38:29.639238 | orchestrator | Saturday 22 March 2025 22:38:29 +0000 (0:00:00.099) 0:00:00.239 ******** 2025-03-22 22:38:31.271803 | orchestrator | ok: [testbed-manager] 2025-03-22 22:38:31.271970 | orchestrator | 2025-03-22 22:38:31.272938 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-03-22 22:38:31.274114 | orchestrator | Saturday 22 March 2025 22:38:31 +0000 (0:00:01.633) 0:00:01.872 ******** 2025-03-22 22:38:32.604760 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-03-22 22:38:32.605158 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-03-22 22:38:32.605269 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-03-22 22:38:32.605452 | orchestrator | 2025-03-22 22:38:32.606229 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-03-22 22:38:32.606666 | orchestrator | Saturday 22 March 2025 22:38:32 +0000 (0:00:01.334) 0:00:03.206 ******** 2025-03-22 22:38:33.788811 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-03-22 22:38:33.789289 | 
orchestrator | 2025-03-22 22:38:33.789982 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-03-22 22:38:33.790830 | orchestrator | Saturday 22 March 2025 22:38:33 +0000 (0:00:01.169) 0:00:04.376 ******** 2025-03-22 22:38:34.194905 | orchestrator | ok: [testbed-manager] 2025-03-22 22:38:34.195246 | orchestrator | 2025-03-22 22:38:34.195768 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-03-22 22:38:34.196032 | orchestrator | Saturday 22 March 2025 22:38:34 +0000 (0:00:00.418) 0:00:04.795 ******** 2025-03-22 22:38:35.241689 | orchestrator | changed: [testbed-manager] 2025-03-22 22:38:35.242097 | orchestrator | 2025-03-22 22:38:35.243494 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-03-22 22:38:35.243862 | orchestrator | Saturday 22 March 2025 22:38:35 +0000 (0:00:01.048) 0:00:05.843 ******** 2025-03-22 22:39:03.219000 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
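The "Manage squid service" task above is configured with 10 retries and succeeds on a later attempt once the container is up. The pattern is Ansible's `retries`/`delay`/`until` loop; a rough stand-in (the probe function here is a placeholder, not the real health check):

```python
# Sketch of a bounded retry loop like Ansible's retries/until on the
# "Manage squid service" task above. The probe is a stand-in; the real
# task polls the squid container state.
import time

def retry(probe, retries=10, delay=0.0):
    """Call probe() until it returns truthy or retries are exhausted.

    Returns the zero-based attempt index that succeeded.
    """
    for attempt in range(retries + 1):
        if probe():
            return attempt
        if attempt < retries:
            time.sleep(delay)
    raise RuntimeError("all retries exhausted")

calls = iter([False, True])  # fail once, then succeed, as in the log above
print(retry(lambda: next(calls)))  # 1
```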
2025-03-22 22:39:15.743572 | orchestrator | ok: [testbed-manager] 2025-03-22 22:39:15.743670 | orchestrator | 2025-03-22 22:39:15.743681 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-03-22 22:39:15.743688 | orchestrator | Saturday 22 March 2025 22:39:03 +0000 (0:00:27.973) 0:00:33.816 ******** 2025-03-22 22:39:15.743706 | orchestrator | changed: [testbed-manager] 2025-03-22 22:40:15.817098 | orchestrator | 2025-03-22 22:40:15.817288 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-03-22 22:40:15.817311 | orchestrator | Saturday 22 March 2025 22:39:15 +0000 (0:00:12.523) 0:00:46.340 ******** 2025-03-22 22:40:15.817343 | orchestrator | Pausing for 60 seconds 2025-03-22 22:40:15.902188 | orchestrator | changed: [testbed-manager] 2025-03-22 22:40:15.902307 | orchestrator | 2025-03-22 22:40:15.902326 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-03-22 22:40:15.902343 | orchestrator | Saturday 22 March 2025 22:40:15 +0000 (0:01:00.075) 0:01:46.415 ******** 2025-03-22 22:40:15.902372 | orchestrator | ok: [testbed-manager] 2025-03-22 22:40:15.903357 | orchestrator | 2025-03-22 22:40:15.903588 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-03-22 22:40:15.904777 | orchestrator | Saturday 22 March 2025 22:40:15 +0000 (0:00:00.089) 0:01:46.505 ******** 2025-03-22 22:40:16.589612 | orchestrator | changed: [testbed-manager] 2025-03-22 22:40:16.589990 | orchestrator | 2025-03-22 22:40:16.590113 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-22 22:40:16.590134 | orchestrator | 2025-03-22 22:40:16 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-03-22 22:40:16.590613 | orchestrator | 2025-03-22 22:40:16 | INFO  | Please wait and do not abort execution. 2025-03-22 22:40:16.590633 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-22 22:40:16.591363 | orchestrator | 2025-03-22 22:40:16.591788 | orchestrator | 2025-03-22 22:40:16.592245 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-22 22:40:16.592884 | orchestrator | Saturday 22 March 2025 22:40:16 +0000 (0:00:00.688) 0:01:47.193 ******** 2025-03-22 22:40:16.593405 | orchestrator | =============================================================================== 2025-03-22 22:40:16.593833 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2025-03-22 22:40:16.594703 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 27.97s 2025-03-22 22:40:16.595594 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.52s 2025-03-22 22:40:16.596256 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.63s 2025-03-22 22:40:16.596452 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.33s 2025-03-22 22:40:16.597223 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.17s 2025-03-22 22:40:16.597519 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 1.05s 2025-03-22 22:40:16.597910 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.69s 2025-03-22 22:40:16.598235 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.42s 2025-03-22 22:40:16.598646 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s 2025-03-22 22:40:16.598921 | orchestrator | 
osism.services.squid : Register that squid service was restarted -------- 0.09s 2025-03-22 22:40:17.137165 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-03-22 22:40:17.137647 | orchestrator | ++ semver latest 9.0.0 2025-03-22 22:40:17.193246 | orchestrator | + [[ -1 -lt 0 ]] 2025-03-22 22:40:17.193610 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-03-22 22:40:17.193645 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-03-22 22:40:18.864747 | orchestrator | 2025-03-22 22:40:18 | INFO  | Task 3c9084c9-159e-4054-8051-e8f0f27da9c7 (operator) was prepared for execution. 2025-03-22 22:40:22.619549 | orchestrator | 2025-03-22 22:40:18 | INFO  | It takes a moment until task 3c9084c9-159e-4054-8051-e8f0f27da9c7 (operator) has been started and output is visible here. 2025-03-22 22:40:22.619694 | orchestrator | 2025-03-22 22:40:22.623990 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-03-22 22:40:22.624022 | orchestrator | 2025-03-22 22:40:22.624047 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-22 22:40:26.115909 | orchestrator | Saturday 22 March 2025 22:40:22 +0000 (0:00:00.124) 0:00:00.124 ******** 2025-03-22 22:40:26.116081 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:40:26.116157 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:40:26.117282 | orchestrator | ok: [testbed-node-2] 2025-03-22 22:40:26.118400 | orchestrator | ok: [testbed-node-1] 2025-03-22 22:40:26.119456 | orchestrator | ok: [testbed-node-0] 2025-03-22 22:40:26.120289 | orchestrator | ok: [testbed-node-4] 2025-03-22 22:40:26.120954 | orchestrator | 2025-03-22 22:40:26.121321 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-03-22 22:40:26.122202 | orchestrator | Saturday 22 March 2025 22:40:26 +0000 (0:00:03.502) 0:00:03.626 ******** 2025-03-22 22:40:26.989622 | orchestrator | ok: [testbed-node-1] 
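The `semver latest 9.0.0` call above prints a three-way comparison result (-1/0/1), which the shell then tests with `[[ -1 -lt 0 ]]`; with the manager version set to `latest` the result is -1 and the version-gated branch is skipped. A sketch of that comparison — the treatment of non-numeric inputs such as `latest` is assumed from the logged output, not verified against the real helper:

```python
# Sketch of a three-way semver comparison like the `semver` helper above.
# Assumption (from the logged "-1"): a non-numeric version such as "latest"
# sorts before any concrete dotted version.

def semver_cmp(a: str, b: str) -> int:
    """Return -1, 0, or 1 comparing dotted numeric versions."""
    def parse(v):
        try:
            return [int(p) for p in v.split(".")]
        except ValueError:
            return None  # non-numeric, e.g. "latest"
    pa, pb = parse(a), parse(b)
    if pa is None or pb is None:
        return 0 if pa == pb else (-1 if pa is None else 1)
    return (pa > pb) - (pa < pb)

print(semver_cmp("latest", "9.0.0"))  # -1, matching the [[ -1 -lt 0 ]] branch
```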
2025-03-22 22:40:26.989871 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:40:26.989914 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:40:26.989935 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:40:26.990856 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:40:26.991278 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:40:26.991969 | orchestrator |
2025-03-22 22:40:26.994101 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-03-22 22:40:26.994270 | orchestrator |
2025-03-22 22:40:26.996590 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-03-22 22:40:26.998466 | orchestrator | Saturday 22 March 2025 22:40:26 +0000 (0:00:00.872) 0:00:04.499 ********
2025-03-22 22:40:27.054776 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:40:27.110169 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:40:27.143924 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:40:27.197949 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:40:27.201682 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:40:27.201710 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:40:27.201725 | orchestrator |
2025-03-22 22:40:27.201740 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-03-22 22:40:27.201760 | orchestrator | Saturday 22 March 2025 22:40:27 +0000 (0:00:00.203) 0:00:04.703 ********
2025-03-22 22:40:27.272605 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:40:27.297163 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:40:27.323927 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:40:27.379342 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:40:27.384134 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:40:27.384310 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:40:27.384365 | orchestrator |
2025-03-22 22:40:27.385941 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-03-22 22:40:27.386502 | orchestrator | Saturday 22 March 2025 22:40:27 +0000 (0:00:00.186) 0:00:04.889 ********
2025-03-22 22:40:28.069708 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:40:28.070624 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:40:28.070661 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:40:28.071197 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:40:28.072545 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:40:28.073146 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:40:28.074800 | orchestrator |
2025-03-22 22:40:28.075528 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-03-22 22:40:28.075913 | orchestrator | Saturday 22 March 2025 22:40:28 +0000 (0:00:00.690) 0:00:05.580 ********
2025-03-22 22:40:28.943918 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:40:28.945171 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:40:28.945242 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:40:28.946251 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:40:28.946601 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:40:28.947292 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:40:28.948021 | orchestrator |
2025-03-22 22:40:28.948427 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-03-22 22:40:28.948879 | orchestrator | Saturday 22 March 2025 22:40:28 +0000 (0:00:00.873) 0:00:06.453 ********
2025-03-22 22:40:30.290323 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-03-22 22:40:30.291260 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-03-22 22:40:30.292028 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-03-22 22:40:30.293023 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-03-22 22:40:30.296016 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-03-22 22:40:30.296083 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-03-22 22:40:30.297072 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-03-22 22:40:30.297882 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-03-22 22:40:30.298642 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-03-22 22:40:30.300000 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-03-22 22:40:30.301172 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-03-22 22:40:30.302325 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-03-22 22:40:30.303451 | orchestrator |
2025-03-22 22:40:30.304490 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-03-22 22:40:30.304807 | orchestrator | Saturday 22 March 2025 22:40:30 +0000 (0:00:01.345) 0:00:07.798 ********
2025-03-22 22:40:31.589330 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:40:31.590286 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:40:31.590348 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:40:31.590517 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:40:31.590563 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:40:31.591023 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:40:31.591122 | orchestrator |
2025-03-22 22:40:31.591376 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-03-22 22:40:31.591733 | orchestrator | Saturday 22 March 2025 22:40:31 +0000 (0:00:01.299) 0:00:09.098 ********
2025-03-22 22:40:32.786918 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-03-22 22:40:32.788047 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-03-22 22:40:32.788969 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-03-22 22:40:33.015688 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-03-22 22:40:33.015834 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-03-22 22:40:33.015855 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-03-22 22:40:33.015870 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-03-22 22:40:33.015884 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-03-22 22:40:33.015899 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-03-22 22:40:33.015913 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-03-22 22:40:33.015932 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-03-22 22:40:33.016464 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-03-22 22:40:33.016632 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-03-22 22:40:33.017323 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-03-22 22:40:33.017997 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-03-22 22:40:33.018291 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-03-22 22:40:33.018896 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-03-22 22:40:33.019435 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-03-22 22:40:33.020017 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-03-22 22:40:33.021168 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-03-22 22:40:33.021915 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-03-22 22:40:33.022494 | orchestrator |
2025-03-22 22:40:33.022938 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-03-22 22:40:33.023916 | orchestrator | Saturday 22 March 2025 22:40:33 +0000 (0:00:01.425) 0:00:10.523 ********
2025-03-22 22:40:33.598969 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:40:33.599123 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:40:33.599149 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:40:33.599575 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:40:33.600236 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:40:33.601436 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:40:33.602667 | orchestrator |
2025-03-22 22:40:33.604735 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-03-22 22:40:33.606091 | orchestrator | Saturday 22 March 2025 22:40:33 +0000 (0:00:00.584) 0:00:11.108 ********
2025-03-22 22:40:33.682061 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:40:33.743190 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:40:33.812912 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:40:33.812990 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:40:33.816893 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:40:33.816944 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:40:33.816955 | orchestrator |
2025-03-22 22:40:33.817857 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-03-22 22:40:33.818674 | orchestrator | Saturday 22 March 2025 22:40:33 +0000 (0:00:00.213) 0:00:11.322 ********
2025-03-22 22:40:34.662153 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-03-22 22:40:34.662532 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:40:34.662561 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-03-22 22:40:34.662577 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:40:34.662614 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-03-22 22:40:34.664939 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-03-22 22:40:34.666123 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:40:34.666141 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:40:34.666154 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-03-22 22:40:34.666232 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:40:34.666636 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-03-22 22:40:34.667010 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:40:34.667346 | orchestrator |
2025-03-22 22:40:34.667657 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-03-22 22:40:34.668969 | orchestrator | Saturday 22 March 2025 22:40:34 +0000 (0:00:00.849) 0:00:12.171 ********
2025-03-22 22:40:34.715075 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:40:34.740034 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:40:34.767065 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:40:34.792743 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:40:34.838281 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:40:34.838679 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:40:34.839292 | orchestrator |
2025-03-22 22:40:34.840247 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-03-22 22:40:34.841159 | orchestrator | Saturday 22 March 2025 22:40:34 +0000 (0:00:00.177) 0:00:12.348 ********
2025-03-22 22:40:34.900392 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:40:34.927351 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:40:34.963133 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:40:34.984511 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:40:35.026949 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:40:35.027416 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:40:35.028411 | orchestrator |
2025-03-22 22:40:35.029037 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-03-22 22:40:35.032273 | orchestrator | Saturday 22 March 2025 22:40:35 +0000 (0:00:00.188) 0:00:12.537 ********
2025-03-22 22:40:35.086608 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:40:35.113113 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:40:35.140649 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:40:35.211863 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:40:35.212519 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:40:35.212585 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:40:35.212946 | orchestrator |
2025-03-22 22:40:35.213875 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-03-22 22:40:35.213982 | orchestrator | Saturday 22 March 2025 22:40:35 +0000 (0:00:00.183) 0:00:12.721 ********
2025-03-22 22:40:35.968794 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:40:35.968963 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:40:35.969249 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:40:35.969545 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:40:35.970276 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:40:35.970721 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:40:35.971038 | orchestrator |
2025-03-22 22:40:35.972463 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-03-22 22:40:36.057271 | orchestrator | Saturday 22 March 2025 22:40:35 +0000 (0:00:00.757) 0:00:13.478 ********
2025-03-22 22:40:36.057395 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:40:36.116571 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:40:36.261059 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:40:36.262167 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:40:36.263478 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:40:36.264456 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:40:36.265286 | orchestrator |
2025-03-22 22:40:36.266665 | orchestrator | PLAY RECAP *********************************************************************
2025-03-22 22:40:36.266916 | orchestrator | 2025-03-22 22:40:36 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-22 22:40:36.267187 | orchestrator | 2025-03-22 22:40:36 | INFO  | Please wait and do not abort execution.
2025-03-22 22:40:36.268734 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-03-22 22:40:36.269175 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-03-22 22:40:36.270313 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-03-22 22:40:36.271752 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-03-22 22:40:36.272636 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-03-22 22:40:36.274746 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-03-22 22:40:36.275750 | orchestrator |
2025-03-22 22:40:36.276576 | orchestrator |
2025-03-22 22:40:36.277825 | orchestrator | TASKS RECAP ********************************************************************
2025-03-22 22:40:36.278965 | orchestrator | Saturday 22 March 2025 22:40:36 +0000 (0:00:00.292) 0:00:13.771 ********
2025-03-22 22:40:36.279698 | orchestrator | ===============================================================================
2025-03-22 22:40:36.280652 | orchestrator | Gathering Facts --------------------------------------------------------- 3.50s
2025-03-22 22:40:36.281763 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.43s
2025-03-22 22:40:36.283070 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.35s
2025-03-22 22:40:36.284283 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.30s
2025-03-22 22:40:36.285093 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.87s
2025-03-22 22:40:36.286419 | orchestrator | Do not require tty for all users ---------------------------------------- 0.87s
2025-03-22 22:40:36.287135 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.85s
2025-03-22 22:40:36.287946 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.76s
2025-03-22 22:40:36.288647 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.69s
2025-03-22 22:40:36.289376 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.58s
2025-03-22 22:40:36.290091 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.29s
2025-03-22 22:40:36.290696 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.21s
2025-03-22 22:40:36.291635 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.20s
2025-03-22 22:40:36.292414 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.19s
2025-03-22 22:40:36.293269 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.19s
2025-03-22 22:40:36.293837 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.18s
2025-03-22 22:40:36.294817 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.18s
2025-03-22 22:40:36.762372 | orchestrator | + osism apply --environment custom facts
2025-03-22 22:40:38.302594 | orchestrator | 2025-03-22 22:40:38 | INFO  | Trying to run play facts in environment custom
2025-03-22 22:40:38.352325 | orchestrator | 2025-03-22 22:40:38 | INFO  | Task 03cc3d10-6382-4e39-a9ac-97a163afb4cd (facts) was prepared for execution.
2025-03-22 22:40:41.967739 | orchestrator | 2025-03-22 22:40:38 | INFO  | It takes a moment until task 03cc3d10-6382-4e39-a9ac-97a163afb4cd (facts) has been started and output is visible here.
2025-03-22 22:40:41.968739 | orchestrator |
2025-03-22 22:40:41.968828 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-03-22 22:40:41.968848 | orchestrator |
2025-03-22 22:40:41.968867 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-03-22 22:40:41.972334 | orchestrator | Saturday 22 March 2025 22:40:41 +0000 (0:00:00.102) 0:00:00.102 ********
2025-03-22 22:40:43.464794 | orchestrator | ok: [testbed-manager]
2025-03-22 22:40:43.467274 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:40:43.468381 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:40:43.468415 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:40:43.472676 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:40:43.472704 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:40:43.472718 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:40:43.472738 | orchestrator |
2025-03-22 22:40:43.473258 | orchestrator | TASK [Copy fact file] **********************************************************
2025-03-22 22:40:43.474132 | orchestrator | Saturday 22 March 2025 22:40:43 +0000 (0:00:01.495) 0:00:01.598 ********
2025-03-22 22:40:44.828855 | orchestrator | ok: [testbed-manager]
2025-03-22 22:40:44.830580 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:40:44.830620 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:40:44.830643 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:40:44.831399 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:40:44.831425 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:40:44.831446 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:40:44.831914 | orchestrator |
2025-03-22 22:40:44.832421 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-03-22 22:40:44.832964 | orchestrator |
2025-03-22 22:40:44.833393 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-03-22 22:40:44.833819 | orchestrator | Saturday 22 March 2025 22:40:44 +0000 (0:00:01.364) 0:00:02.963 ********
2025-03-22 22:40:44.931252 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:40:44.932255 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:40:44.933257 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:40:44.934090 | orchestrator |
2025-03-22 22:40:44.934654 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-03-22 22:40:44.934841 | orchestrator | Saturday 22 March 2025 22:40:44 +0000 (0:00:00.105) 0:00:03.068 ********
2025-03-22 22:40:45.108201 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:40:45.108710 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:40:45.109358 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:40:45.109802 | orchestrator |
2025-03-22 22:40:45.113584 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-03-22 22:40:45.114095 | orchestrator | Saturday 22 March 2025 22:40:45 +0000 (0:00:00.176) 0:00:03.244 ********
2025-03-22 22:40:45.248390 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:40:45.250635 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:40:45.251576 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:40:45.255073 | orchestrator |
2025-03-22 22:40:45.255683 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-03-22 22:40:45.255707 | orchestrator | Saturday 22 March 2025 22:40:45 +0000 (0:00:00.140) 0:00:03.385 ********
2025-03-22 22:40:45.427689 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-03-22 22:40:45.428167 | orchestrator |
2025-03-22 22:40:45.428621 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-03-22 22:40:45.429305 | orchestrator | Saturday 22 March 2025 22:40:45 +0000 (0:00:00.178) 0:00:03.563 ********
2025-03-22 22:40:45.889799 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:40:45.889941 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:40:45.890239 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:40:45.890701 | orchestrator |
2025-03-22 22:40:45.890970 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-03-22 22:40:45.891292 | orchestrator | Saturday 22 March 2025 22:40:45 +0000 (0:00:00.462) 0:00:04.026 ********
2025-03-22 22:40:46.032613 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:40:46.032775 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:40:46.033708 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:40:46.034513 | orchestrator |
2025-03-22 22:40:46.034936 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-03-22 22:40:46.035729 | orchestrator | Saturday 22 March 2025 22:40:46 +0000 (0:00:00.142) 0:00:04.168 ********
2025-03-22 22:40:47.153832 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:40:47.155019 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:40:47.156524 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:40:47.157646 | orchestrator |
2025-03-22 22:40:47.160565 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-03-22 22:40:47.161624 | orchestrator | Saturday 22 March 2025 22:40:47 +0000 (0:00:01.120) 0:00:05.289 ********
2025-03-22 22:40:47.619144 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:40:47.619799 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:40:47.620736 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:40:47.620846 | orchestrator |
2025-03-22 22:40:47.622531 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-03-22 22:40:48.715872 | orchestrator | Saturday 22 March 2025 22:40:47 +0000 (0:00:00.465) 0:00:05.755 ********
2025-03-22 22:40:48.715997 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:40:48.716428 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:40:48.717679 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:40:48.718067 | orchestrator |
2025-03-22 22:40:48.720712 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-03-22 22:40:48.721381 | orchestrator | Saturday 22 March 2025 22:40:48 +0000 (0:00:01.095) 0:00:06.850 ********
2025-03-22 22:41:04.285889 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:41:04.287878 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:41:04.289305 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:41:04.289341 | orchestrator |
2025-03-22 22:41:04.293991 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-03-22 22:41:04.381974 | orchestrator | Saturday 22 March 2025 22:41:04 +0000 (0:00:15.567) 0:00:22.418 ********
2025-03-22 22:41:04.382013 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:41:04.383029 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:41:04.383576 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:41:04.384299 | orchestrator |
2025-03-22 22:41:04.384375 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-03-22 22:41:04.385100 | orchestrator | Saturday 22 March 2025 22:41:04 +0000 (0:00:00.100) 0:00:22.519 ********
2025-03-22 22:41:12.687549 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:41:12.687978 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:41:12.689397 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:41:12.689528 | orchestrator |
2025-03-22 22:41:12.689919 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-03-22 22:41:12.690314 | orchestrator | Saturday 22 March 2025 22:41:12 +0000 (0:00:08.301) 0:00:30.820 ********
2025-03-22 22:41:13.191759 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:41:13.191991 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:41:13.192020 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:41:13.193267 | orchestrator |
2025-03-22 22:41:13.193609 | orchestrator | TASK [Copy fact files] *********************************************************
2025-03-22 22:41:13.194788 | orchestrator | Saturday 22 March 2025 22:41:13 +0000 (0:00:00.506) 0:00:31.327 ********
2025-03-22 22:41:17.037791 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-03-22 22:41:17.037959 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-03-22 22:41:17.037987 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-03-22 22:41:17.038342 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-03-22 22:41:17.039373 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-03-22 22:41:17.039533 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-03-22 22:41:17.040036 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-03-22 22:41:17.041492 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-03-22 22:41:17.042466 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-03-22 22:41:17.042787 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-03-22 22:41:17.043383 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-03-22 22:41:17.043878 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-03-22 22:41:17.044160 | orchestrator |
2025-03-22 22:41:17.044433 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-03-22 22:41:17.044780 | orchestrator | Saturday 22 March 2025 22:41:17 +0000 (0:00:03.845) 0:00:35.173 ********
2025-03-22 22:41:18.270277 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:41:18.270889 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:41:18.271414 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:41:18.274395 | orchestrator |
2025-03-22 22:41:18.277345 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-03-22 22:41:18.282312 | orchestrator |
2025-03-22 22:41:18.283835 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-03-22 22:41:18.288497 | orchestrator | Saturday 22 March 2025 22:41:18 +0000 (0:00:01.232) 0:00:36.406 ********
2025-03-22 22:41:22.287650 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:41:22.287841 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:41:22.288272 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:41:22.288846 | orchestrator | ok: [testbed-manager]
2025-03-22 22:41:22.289979 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:41:22.290733 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:41:22.291080 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:41:22.291523 | orchestrator |
2025-03-22 22:41:22.293297 | orchestrator | PLAY RECAP *********************************************************************
2025-03-22 22:41:22.293344 | orchestrator | 2025-03-22 22:41:22 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-22 22:41:22.294127 | orchestrator | 2025-03-22 22:41:22 | INFO  | Please wait and do not abort execution.
2025-03-22 22:41:22.294169 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 22:41:22.294818 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 22:41:22.295293 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 22:41:22.296001 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 22:41:22.296646 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-22 22:41:22.298060 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-22 22:41:22.299060 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-22 22:41:22.300070 | orchestrator |
2025-03-22 22:41:22.301022 | orchestrator |
2025-03-22 22:41:22.302110 | orchestrator | TASKS RECAP ********************************************************************
2025-03-22 22:41:22.303104 | orchestrator | Saturday 22 March 2025 22:41:22 +0000 (0:00:04.018) 0:00:40.424 ********
2025-03-22 22:41:22.304008 | orchestrator | ===============================================================================
2025-03-22 22:41:22.304701 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.57s
2025-03-22 22:41:22.305258 | orchestrator | Install required packages (Debian) -------------------------------------- 8.30s
2025-03-22 22:41:22.305579 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.02s
2025-03-22 22:41:22.306120 | orchestrator | Copy fact files --------------------------------------------------------- 3.85s
2025-03-22 22:41:22.306605 | orchestrator | Create custom facts directory ------------------------------------------- 1.50s
2025-03-22 22:41:22.307073 | orchestrator | Copy fact file ---------------------------------------------------------- 1.36s
2025-03-22 22:41:22.307587 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.23s
2025-03-22 22:41:22.308548 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.12s
2025-03-22 22:41:22.308913 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.10s
2025-03-22 22:41:22.309161 | orchestrator | Create custom facts directory ------------------------------------------- 0.51s
2025-03-22 22:41:22.309841 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.47s
2025-03-22 22:41:22.310231 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.46s
2025-03-22 22:41:22.311564 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.18s
2025-03-22 22:41:22.315828 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.18s
2025-03-22 22:41:22.316622 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.14s
2025-03-22 22:41:22.316735 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.14s
2025-03-22 22:41:22.317625 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s
2025-03-22 22:41:22.797577 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2025-03-22 22:41:22.797678 | orchestrator | + osism apply bootstrap
2025-03-22 22:41:24.511936 | orchestrator | 2025-03-22 22:41:24 | INFO  | Task 5281a5df-2150-4940-bcfa-65625eaf1e50 (bootstrap) was prepared for execution.
2025-03-22 22:41:28.382133 | orchestrator | 2025-03-22 22:41:24 | INFO  | It takes a moment until task 5281a5df-2150-4940-bcfa-65625eaf1e50 (bootstrap) has been started and output is visible here.
2025-03-22 22:41:28.382328 | orchestrator |
2025-03-22 22:41:28.382771 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-03-22 22:41:28.384604 | orchestrator |
2025-03-22 22:41:28.385087 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-03-22 22:41:28.386333 | orchestrator | Saturday 22 March 2025 22:41:28 +0000 (0:00:00.122) 0:00:00.122 ********
2025-03-22 22:41:28.453390 | orchestrator | ok: [testbed-manager]
2025-03-22 22:41:28.483480 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:41:28.561621 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:41:28.594630 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:41:28.676243 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:41:28.677145 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:41:28.678281 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:41:28.682346 | orchestrator |
2025-03-22 22:41:28.683350 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-03-22 22:41:28.684190 | orchestrator |
2025-03-22 22:41:28.685344 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-03-22 22:41:28.685649 | orchestrator | Saturday 22 March 2025 22:41:28 +0000 (0:00:00.297) 0:00:00.419 ********
2025-03-22 22:41:33.388598 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:41:33.389986 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:41:33.391004 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:41:33.391923 | orchestrator | ok: [testbed-manager]
2025-03-22 22:41:33.392799 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:41:33.393752 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:41:33.394143 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:41:33.394410 | orchestrator |
2025-03-22 22:41:33.395286 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-03-22 22:41:33.396230 | orchestrator |
2025-03-22 22:41:33.397133 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-03-22 22:41:33.397704 | orchestrator | Saturday 22 March 2025 22:41:33 +0000 (0:00:04.711) 0:00:05.130 ********
2025-03-22 22:41:33.463505 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-03-22 22:41:33.520607 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-03-22 22:41:33.520982 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-03-22 22:41:33.556141 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-03-22 22:41:33.556245 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-03-22 22:41:33.556326 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-03-22 22:41:33.556826 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-03-22 22:41:33.586145 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-03-22 22:41:33.586720 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-03-22 22:41:33.972688 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-03-22 22:41:33.973329 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-03-22 22:41:33.973704 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-03-22 22:41:33.974323 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-03-22 22:41:33.974861 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-03-22 22:41:33.975191 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-03-22 22:41:33.978716 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-03-22 22:41:33.978866 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-03-22 22:41:33.979447 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-03-22 22:41:33.979723 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:41:33.979747 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-03-22 22:41:33.979871 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-03-22 22:41:33.980456 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-03-22 22:41:33.980960 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-03-22 22:41:33.981394 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:41:33.981783 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-03-22 22:41:33.981976 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-03-22 22:41:33.982419 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-03-22 22:41:33.982826 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-03-22 22:41:33.983175 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-03-22 22:41:33.983658 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-03-22 22:41:33.983962 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-03-22 22:41:33.984592 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-03-22 22:41:33.984943 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-03-22 22:41:33.985297 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-03-22 22:41:33.985825 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-03-22 22:41:33.986251 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-03-22 22:41:33.986492 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-03-22 22:41:33.986972 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-03-22 22:41:33.987356 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-03-22 22:41:33.987716 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-03-22 22:41:33.988604 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:41:33.988688 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-03-22 22:41:33.989075 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-03-22 22:41:33.989270 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-03-22 22:41:33.992724 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-03-22 22:41:33.993008 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-03-22 22:41:33.993134 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-03-22 22:41:33.993343 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:41:33.993643 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-03-22 22:41:33.993948 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:41:33.994230 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-03-22 22:41:33.994610 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:41:33.995336 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-03-22 22:41:33.995437 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-03-22 22:41:33.995767 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-03-22 22:41:33.996153 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:41:33.996393 | orchestrator |
2025-03-22 22:41:33.996623 | orchestrator | PLAY [Apply bootstrap roles part 1]
******************************************** 2025-03-22 22:41:33.996974 | orchestrator | 2025-03-22 22:41:33.997283 | orchestrator | TASK [osism.commons.hostname : Set hostname_name fact] ************************* 2025-03-22 22:41:33.997468 | orchestrator | Saturday 22 March 2025 22:41:33 +0000 (0:00:00.584) 0:00:05.715 ******** 2025-03-22 22:41:34.073101 | orchestrator | ok: [testbed-manager] 2025-03-22 22:41:34.100983 | orchestrator | ok: [testbed-node-0] 2025-03-22 22:41:34.140336 | orchestrator | ok: [testbed-node-1] 2025-03-22 22:41:34.170629 | orchestrator | ok: [testbed-node-2] 2025-03-22 22:41:34.235512 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:41:34.236446 | orchestrator | ok: [testbed-node-4] 2025-03-22 22:41:34.237513 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:41:34.238278 | orchestrator | 2025-03-22 22:41:34.238978 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-03-22 22:41:34.239458 | orchestrator | Saturday 22 March 2025 22:41:34 +0000 (0:00:00.262) 0:00:05.977 ******** 2025-03-22 22:41:35.605529 | orchestrator | ok: [testbed-node-0] 2025-03-22 22:41:35.605863 | orchestrator | ok: [testbed-node-4] 2025-03-22 22:41:35.606458 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:41:35.607317 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:41:35.607795 | orchestrator | ok: [testbed-node-2] 2025-03-22 22:41:35.608695 | orchestrator | ok: [testbed-manager] 2025-03-22 22:41:35.610200 | orchestrator | ok: [testbed-node-1] 2025-03-22 22:41:35.610448 | orchestrator | 2025-03-22 22:41:35.611090 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-03-22 22:41:35.612118 | orchestrator | Saturday 22 March 2025 22:41:35 +0000 (0:00:01.369) 0:00:07.346 ******** 2025-03-22 22:41:37.006696 | orchestrator | ok: [testbed-node-1] 2025-03-22 22:41:37.006893 | orchestrator | ok: [testbed-manager] 2025-03-22 22:41:37.010237 | orchestrator | ok: 
[testbed-node-4] 2025-03-22 22:41:37.010720 | orchestrator | ok: [testbed-node-2] 2025-03-22 22:41:37.010750 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:41:37.011367 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:41:37.012084 | orchestrator | ok: [testbed-node-0] 2025-03-22 22:41:37.012887 | orchestrator | 2025-03-22 22:41:37.013360 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-03-22 22:41:37.014006 | orchestrator | Saturday 22 March 2025 22:41:36 +0000 (0:00:01.400) 0:00:08.747 ******** 2025-03-22 22:41:37.326719 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-22 22:41:37.326912 | orchestrator | 2025-03-22 22:41:37.327799 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-03-22 22:41:37.328818 | orchestrator | Saturday 22 March 2025 22:41:37 +0000 (0:00:00.321) 0:00:09.068 ******** 2025-03-22 22:41:40.016881 | orchestrator | changed: [testbed-manager] 2025-03-22 22:41:40.017338 | orchestrator | changed: [testbed-node-1] 2025-03-22 22:41:40.018422 | orchestrator | changed: [testbed-node-2] 2025-03-22 22:41:40.019784 | orchestrator | changed: [testbed-node-0] 2025-03-22 22:41:40.022093 | orchestrator | changed: [testbed-node-5] 2025-03-22 22:41:40.023617 | orchestrator | changed: [testbed-node-4] 2025-03-22 22:41:40.024456 | orchestrator | changed: [testbed-node-3] 2025-03-22 22:41:40.026536 | orchestrator | 2025-03-22 22:41:40.028332 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-03-22 22:41:40.028375 | orchestrator | Saturday 22 March 2025 22:41:40 +0000 (0:00:02.688) 0:00:11.757 ******** 2025-03-22 22:41:40.120916 | orchestrator | skipping: [testbed-manager] 2025-03-22 22:41:40.381480 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-22 22:41:40.382108 | orchestrator | 2025-03-22 22:41:40.382914 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-03-22 22:41:40.383557 | orchestrator | Saturday 22 March 2025 22:41:40 +0000 (0:00:00.366) 0:00:12.123 ******** 2025-03-22 22:41:41.474737 | orchestrator | changed: [testbed-node-0] 2025-03-22 22:41:41.475403 | orchestrator | changed: [testbed-node-2] 2025-03-22 22:41:41.476811 | orchestrator | changed: [testbed-node-4] 2025-03-22 22:41:41.478195 | orchestrator | changed: [testbed-node-1] 2025-03-22 22:41:41.479367 | orchestrator | changed: [testbed-node-5] 2025-03-22 22:41:41.480365 | orchestrator | changed: [testbed-node-3] 2025-03-22 22:41:41.482285 | orchestrator | 2025-03-22 22:41:41.483967 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-03-22 22:41:41.485138 | orchestrator | Saturday 22 March 2025 22:41:41 +0000 (0:00:01.088) 0:00:13.212 ******** 2025-03-22 22:41:41.563140 | orchestrator | skipping: [testbed-manager] 2025-03-22 22:41:42.175760 | orchestrator | changed: [testbed-node-1] 2025-03-22 22:41:42.176338 | orchestrator | changed: [testbed-node-5] 2025-03-22 22:41:42.176375 | orchestrator | changed: [testbed-node-3] 2025-03-22 22:41:42.176812 | orchestrator | changed: [testbed-node-2] 2025-03-22 22:41:42.177262 | orchestrator | changed: [testbed-node-0] 2025-03-22 22:41:42.177730 | orchestrator | changed: [testbed-node-4] 2025-03-22 22:41:42.178107 | orchestrator | 2025-03-22 22:41:42.178355 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-03-22 22:41:42.178772 | orchestrator | Saturday 22 March 2025 22:41:42 +0000 (0:00:00.704) 0:00:13.917 ******** 
2025-03-22 22:41:42.312245 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:41:42.344567 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:41:42.389921 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:41:42.689771 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:41:42.690918 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:41:42.692264 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:41:42.693074 | orchestrator | ok: [testbed-manager]
2025-03-22 22:41:42.693616 | orchestrator |
2025-03-22 22:41:42.694484 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-03-22 22:41:42.695621 | orchestrator | Saturday 22 March 2025 22:41:42 +0000 (0:00:00.513) 0:00:14.430 ********
2025-03-22 22:41:42.781580 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:41:42.816247 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:41:42.842769 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:41:42.874719 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:41:42.949513 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:41:42.950003 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:41:42.950828 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:41:42.951618 | orchestrator |
2025-03-22 22:41:42.951696 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-03-22 22:41:42.952352 | orchestrator | Saturday 22 March 2025 22:41:42 +0000 (0:00:00.261) 0:00:14.691 ********
2025-03-22 22:41:43.357139 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-22 22:41:43.357659 | orchestrator |
2025-03-22 22:41:43.362436 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-03-22 22:41:43.363804 | orchestrator | Saturday 22 March 2025 22:41:43 +0000 (0:00:00.406) 0:00:15.098 ********
2025-03-22 22:41:43.734323 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-22 22:41:43.735166 | orchestrator |
2025-03-22 22:41:43.736033 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-03-22 22:41:43.740277 | orchestrator | Saturday 22 March 2025 22:41:43 +0000 (0:00:00.376) 0:00:15.474 ********
2025-03-22 22:41:45.351544 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:41:45.352032 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:41:45.352510 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:41:45.353588 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:41:45.353983 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:41:45.354806 | orchestrator | ok: [testbed-manager]
2025-03-22 22:41:45.355464 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:41:45.355677 | orchestrator |
2025-03-22 22:41:45.357337 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-03-22 22:41:45.359011 | orchestrator | Saturday 22 March 2025 22:41:45 +0000 (0:00:01.618) 0:00:17.093 ********
2025-03-22 22:41:45.452805 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:41:45.480773 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:41:45.513179 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:41:45.545643 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:41:45.617702 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:41:45.618990 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:41:45.619956 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:41:45.620606 | orchestrator |
2025-03-22 22:41:45.621322 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-03-22 22:41:45.621732 | orchestrator | Saturday 22 March 2025 22:41:45 +0000 (0:00:00.267) 0:00:17.360 ********
2025-03-22 22:41:46.226813 | orchestrator | ok: [testbed-manager]
2025-03-22 22:41:46.227021 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:41:46.227049 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:41:46.227546 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:41:46.227573 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:41:46.227939 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:41:46.228374 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:41:46.228903 | orchestrator |
2025-03-22 22:41:46.229390 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-03-22 22:41:46.229878 | orchestrator | Saturday 22 March 2025 22:41:46 +0000 (0:00:00.606) 0:00:17.966 ********
2025-03-22 22:41:46.330613 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:41:46.378740 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:41:46.404453 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:41:46.436870 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:41:46.518778 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:41:46.519276 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:41:46.519919 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:41:46.520353 | orchestrator |
2025-03-22 22:41:46.521316 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-03-22 22:41:46.524042 | orchestrator | Saturday 22 March 2025 22:41:46 +0000 (0:00:00.294) 0:00:18.261 ********
2025-03-22 22:41:47.159086 | orchestrator | ok: [testbed-manager]
2025-03-22 22:41:47.160276 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:41:47.160689 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:41:47.161163 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:41:47.161612 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:41:47.162162 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:41:47.162898 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:41:47.163094 | orchestrator |
2025-03-22 22:41:47.163418 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-03-22 22:41:47.163725 | orchestrator | Saturday 22 March 2025 22:41:47 +0000 (0:00:00.640) 0:00:18.901 ********
2025-03-22 22:41:48.408463 | orchestrator | ok: [testbed-manager]
2025-03-22 22:41:48.411047 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:41:48.411078 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:41:48.411093 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:41:48.411113 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:41:48.414134 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:41:48.415285 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:41:48.415315 | orchestrator |
2025-03-22 22:41:48.416026 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-03-22 22:41:48.416800 | orchestrator | Saturday 22 March 2025 22:41:48 +0000 (0:00:01.243) 0:00:20.145 ********
2025-03-22 22:41:49.873868 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:41:49.874091 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:41:49.874121 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:41:49.875241 | orchestrator | ok: [testbed-manager]
2025-03-22 22:41:49.878417 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:41:49.880122 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:41:49.880145 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:41:49.880159 | orchestrator |
2025-03-22 22:41:49.880174 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-03-22 22:41:49.880246 | orchestrator | Saturday 22 March 2025 22:41:49 +0000 (0:00:01.468) 0:00:21.613 ********
2025-03-22 22:41:50.244725 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-22 22:41:50.245999 | orchestrator |
2025-03-22 22:41:50.249452 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-03-22 22:41:50.353628 | orchestrator | Saturday 22 March 2025 22:41:50 +0000 (0:00:00.372) 0:00:21.986 ********
2025-03-22 22:41:50.353681 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:41:51.722692 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:41:51.723870 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:41:51.724457 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:41:51.726995 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:41:51.731275 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:41:51.731820 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:41:51.731847 | orchestrator |
2025-03-22 22:41:51.731863 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-03-22 22:41:51.731884 | orchestrator | Saturday 22 March 2025 22:41:51 +0000 (0:00:01.477) 0:00:23.463 ********
2025-03-22 22:41:51.806609 | orchestrator | ok: [testbed-manager]
2025-03-22 22:41:51.836340 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:41:51.871619 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:41:51.901532 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:41:51.978526 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:41:51.979561 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:41:51.980717 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:41:51.982132 | orchestrator |
2025-03-22 22:41:51.982551 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-03-22 22:41:51.983966 | orchestrator | Saturday 22 March 2025 22:41:51 +0000 (0:00:00.256) 0:00:23.720 ********
2025-03-22 22:41:52.079188 | orchestrator | ok: [testbed-manager]
2025-03-22 22:41:52.113567 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:41:52.142245 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:41:52.185640 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:41:52.274951 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:41:52.275432 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:41:52.275467 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:41:52.275886 | orchestrator |
2025-03-22 22:41:52.276302 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-03-22 22:41:52.276904 | orchestrator | Saturday 22 March 2025 22:41:52 +0000 (0:00:00.296) 0:00:24.017 ********
2025-03-22 22:41:52.388284 | orchestrator | ok: [testbed-manager]
2025-03-22 22:41:52.431559 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:41:52.461338 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:41:52.497763 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:41:52.573240 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:41:52.574305 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:41:52.574353 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:41:52.575472 | orchestrator |
2025-03-22 22:41:52.577154 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-03-22 22:41:52.577801 | orchestrator | Saturday 22 March 2025 22:41:52 +0000 (0:00:00.297) 0:00:24.315 ********
2025-03-22 22:41:52.939751 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-22 22:41:53.593592 | orchestrator |
2025-03-22 22:41:53.593698 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-03-22 22:41:53.593736 | orchestrator | Saturday 22 March 2025 22:41:52 +0000 (0:00:00.360) 0:00:24.675 ********
2025-03-22 22:41:53.593768 | orchestrator | ok: [testbed-manager]
2025-03-22 22:41:53.594630 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:41:53.596027 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:41:53.596929 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:41:53.597699 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:41:53.598530 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:41:53.599649 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:41:53.600814 | orchestrator |
2025-03-22 22:41:53.601089 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-03-22 22:41:53.602336 | orchestrator | Saturday 22 March 2025 22:41:53 +0000 (0:00:00.655) 0:00:25.331 ********
2025-03-22 22:41:53.677284 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:41:53.727617 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:41:53.754098 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:41:53.805548 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:41:53.901146 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:41:53.902482 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:41:53.903627 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:41:53.905051 | orchestrator |
2025-03-22 22:41:53.905615 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-03-22 22:41:53.906633 | orchestrator | Saturday 22 March 2025 22:41:53 +0000 (0:00:00.311) 0:00:25.643 ********
2025-03-22 22:41:55.143813 | orchestrator | ok: [testbed-manager]
2025-03-22 22:41:55.145260 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:41:55.145802 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:41:55.147431 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:41:55.147763 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:41:55.148042 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:41:55.148517 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:41:55.149082 | orchestrator |
2025-03-22 22:41:55.149397 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-03-22 22:41:55.149821 | orchestrator | Saturday 22 March 2025 22:41:55 +0000 (0:00:01.238) 0:00:26.881 ********
2025-03-22 22:41:55.828522 | orchestrator | ok: [testbed-manager]
2025-03-22 22:41:55.829146 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:41:55.829186 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:41:55.829738 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:41:55.832575 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:41:55.832955 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:41:55.833627 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:41:55.834298 | orchestrator |
2025-03-22 22:41:55.836390 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-03-22 22:41:57.046969 | orchestrator | Saturday 22 March 2025 22:41:55 +0000 (0:00:00.687) 0:00:27.569 ********
2025-03-22 22:41:57.047101 | orchestrator | ok: [testbed-manager]
2025-03-22 22:41:57.047469 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:41:57.048355 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:41:57.049076 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:41:57.049437 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:41:57.049923 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:41:57.052170 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:41:57.052730 | orchestrator |
2025-03-22 22:41:57.052764 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-03-22 22:41:57.052784 | orchestrator | Saturday 22 March 2025 22:41:57 +0000 (0:00:01.216) 0:00:28.785 ********
2025-03-22 22:42:11.367352 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:42:11.367561 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:42:11.367590 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:42:11.371341 | orchestrator | changed: [testbed-manager]
2025-03-22 22:42:11.371371 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:42:11.371385 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:42:11.371400 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:42:11.371414 | orchestrator |
2025-03-22 22:42:11.371429 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2025-03-22 22:42:11.371451 | orchestrator | Saturday 22 March 2025 22:42:11 +0000 (0:00:14.318) 0:00:43.104 ********
2025-03-22 22:42:11.480941 | orchestrator | ok: [testbed-manager]
2025-03-22 22:42:11.508571 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:42:11.542308 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:42:11.577613 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:42:11.659052 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:42:11.659647 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:42:11.660692 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:42:11.667162 | orchestrator |
2025-03-22 22:42:11.763075 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2025-03-22 22:42:11.763136 | orchestrator | Saturday 22 March 2025 22:42:11 +0000 (0:00:00.297) 0:00:43.401 ********
2025-03-22 22:42:11.763160 | orchestrator | ok: [testbed-manager]
2025-03-22 22:42:11.795779 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:42:11.823535 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:42:11.854622 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:42:11.936851 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:42:11.937105 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:42:11.937733 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:42:11.938328 | orchestrator |
2025-03-22 22:42:11.938981 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2025-03-22 22:42:11.939073 | orchestrator | Saturday 22 March 2025 22:42:11 +0000 (0:00:00.277) 0:00:43.678 ********
2025-03-22 22:42:12.058389 | orchestrator | ok: [testbed-manager]
2025-03-22 22:42:12.084486 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:42:12.113346 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:42:12.144640 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:42:12.215519 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:42:12.217476 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:42:12.218759 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:42:12.219319 | orchestrator |
2025-03-22 22:42:12.220692 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2025-03-22 22:42:12.221721 | orchestrator | Saturday 22 March 2025 22:42:12 +0000 (0:00:00.278) 0:00:43.957 ********
2025-03-22 22:42:12.549692 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-22 22:42:12.549901 | orchestrator |
2025-03-22 22:42:12.551053 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2025-03-22 22:42:12.555352 | orchestrator | Saturday 22 March 2025 22:42:12 +0000 (0:00:00.333) 0:00:44.290 ********
2025-03-22 22:42:14.392544 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:42:14.393107 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:42:14.393379 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:42:14.393676 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:42:14.394583 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:42:14.395007 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:42:14.395102 | orchestrator | ok: [testbed-manager]
2025-03-22 22:42:14.395533 | orchestrator |
2025-03-22 22:42:14.396133 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2025-03-22 22:42:14.396184 | orchestrator | Saturday 22 March 2025 22:42:14 +0000 (0:00:01.843) 0:00:46.134 ********
2025-03-22 22:42:15.546939 | orchestrator | changed: [testbed-manager]
2025-03-22 22:42:15.547352 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:42:15.547861 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:42:15.548628 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:42:15.549597 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:42:15.552344 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:42:15.555651 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:42:15.555963 | orchestrator |
2025-03-22 22:42:15.555996 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2025-03-22 22:42:15.556298 | orchestrator | Saturday 22 March 2025 22:42:15 +0000 (0:00:01.154) 0:00:47.288 ********
2025-03-22 22:42:16.457777 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:42:16.459015 | orchestrator | ok: [testbed-manager]
2025-03-22 22:42:16.459356 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:42:16.460315 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:42:16.460860 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:42:16.461526 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:42:16.462010 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:42:16.462650 | orchestrator |
2025-03-22 22:42:16.465780 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2025-03-22 22:42:16.466059 | orchestrator | Saturday 22 March 2025 22:42:16 +0000 (0:00:00.911) 0:00:48.200 ********
2025-03-22 22:42:16.827487 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-22 22:42:16.827631 | orchestrator |
2025-03-22 22:42:16.828859 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2025-03-22 22:42:16.829413 | orchestrator | Saturday 22 March 2025 22:42:16 +0000 (0:00:00.367) 0:00:48.568 ********
2025-03-22 22:42:17.956845 | orchestrator | changed: [testbed-manager]
2025-03-22 22:42:17.960196 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:42:17.960658 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:42:18.047425 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:42:18.047513 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:42:18.047529 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:42:18.047543 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:42:18.047557 | orchestrator |
2025-03-22 22:42:18.047572 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2025-03-22 22:42:18.047587 | orchestrator | Saturday 22 March 2025 22:42:17 +0000 (0:00:01.129) 0:00:49.697 ********
2025-03-22 22:42:18.047635 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:42:18.084781 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:42:18.118419 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:42:18.149653 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:42:18.338877 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:42:18.342389 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:42:18.342858 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:42:18.342883 | orchestrator |
2025-03-22 22:42:18.342896 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2025-03-22 22:42:18.342915 | orchestrator | Saturday 22 March 2025 22:42:18 +0000 (0:00:00.382) 0:00:50.079 ********
2025-03-22 22:42:31.330319 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:42:31.330571 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:42:31.330610 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:42:31.330645 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:42:31.334173 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:42:31.334635 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:42:31.335136 | orchestrator | changed: [testbed-manager]
2025-03-22 22:42:31.335665 | orchestrator |
2025-03-22 22:42:31.336395 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2025-03-22 22:42:31.336820 | orchestrator | Saturday 22 March 2025 22:42:31 +0000 (0:00:12.985) 0:01:03.065 ********
2025-03-22 22:42:32.702992 | orchestrator | ok: [testbed-manager]
2025-03-22 22:42:32.703144 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:42:32.703823 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:42:32.704269 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:42:32.704765 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:42:32.705634 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:42:32.709325 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:42:33.667160 | orchestrator |
2025-03-22 22:42:33.667303 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-03-22 22:42:33.667325 | orchestrator | Saturday 22 March 2025 22:42:32 +0000 (0:00:01.378) 0:01:04.443 ********
2025-03-22 22:42:33.667357 | orchestrator | ok: [testbed-manager]
2025-03-22 22:42:33.667877 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:42:33.667911 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:42:33.668187 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:42:33.668691 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:42:33.669125 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:42:33.669802 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:42:33.670464 | orchestrator |
2025-03-22 22:42:33.671065 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-03-22 22:42:33.671405 | orchestrator | Saturday 22 March 2025 22:42:33 +0000 (0:00:00.963) 0:01:05.407 ********
2025-03-22 22:42:33.751479 | orchestrator | ok: [testbed-manager]
2025-03-22 22:42:33.783285 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:42:33.811846 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:42:33.858917 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:42:33.935648 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:42:33.936330 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:42:33.937661 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:42:33.938927 | orchestrator |
2025-03-22 22:42:33.939658 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-03-22 22:42:33.940141 | orchestrator | Saturday 22 March 2025 22:42:33 +0000 (0:00:00.267) 0:01:05.675 ********
2025-03-22 22:42:34.023601 | orchestrator | ok: [testbed-manager]
2025-03-22 22:42:34.063993 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:42:34.097585 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:42:34.133825 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:42:34.210987 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:42:34.211277 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:42:34.211850 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:42:34.212196 | orchestrator |
2025-03-22 22:42:34.213156 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-03-22 22:42:34.213382 | orchestrator | Saturday 22 March 2025 22:42:34 +0000 (0:00:00.277) 0:01:05.952 ********
2025-03-22 22:42:34.576266 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-22 22:42:34.578410 | orchestrator | 2025-03-22 22:42:34.579330 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-03-22 22:42:34.580474 | orchestrator | Saturday 22 March 2025 22:42:34 +0000 (0:00:00.364) 0:01:06.316 ******** 2025-03-22 22:42:36.662237 | orchestrator | ok: [testbed-node-1] 2025-03-22 22:42:36.662504 | orchestrator | ok: [testbed-manager] 2025-03-22 22:42:36.663478 | orchestrator | ok: [testbed-node-0] 2025-03-22 22:42:36.664826 | orchestrator | ok: [testbed-node-4] 2025-03-22 22:42:36.665037 | orchestrator | ok: [testbed-node-2] 2025-03-22 22:42:36.665757 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:42:36.666848 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:42:36.667354 | orchestrator | 2025-03-22 22:42:36.667627 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-03-22 22:42:36.668349 | orchestrator | Saturday 22 March 2025 22:42:36 +0000 (0:00:02.083) 0:01:08.400 ******** 2025-03-22 22:42:37.279570 | orchestrator | changed: [testbed-manager] 2025-03-22 22:42:37.279801 | orchestrator | changed: [testbed-node-5] 2025-03-22 22:42:37.280647 | orchestrator | changed: [testbed-node-0] 2025-03-22 22:42:37.280675 | orchestrator | changed: [testbed-node-3] 2025-03-22 22:42:37.280697 | orchestrator | changed: [testbed-node-4] 2025-03-22 22:42:37.280826 | orchestrator | changed: [testbed-node-1] 2025-03-22 22:42:37.280851 | orchestrator | changed: [testbed-node-2] 2025-03-22 22:42:37.281405 | orchestrator | 2025-03-22 22:42:37.282309 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-03-22 22:42:37.282343 | orchestrator | Saturday 22 March 2025 22:42:37 
+0000 (0:00:00.620) 0:01:09.021 ******** 2025-03-22 22:42:37.356937 | orchestrator | ok: [testbed-manager] 2025-03-22 22:42:37.381297 | orchestrator | ok: [testbed-node-0] 2025-03-22 22:42:37.403280 | orchestrator | ok: [testbed-node-1] 2025-03-22 22:42:37.428455 | orchestrator | ok: [testbed-node-2] 2025-03-22 22:42:37.493258 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:42:37.493359 | orchestrator | ok: [testbed-node-4] 2025-03-22 22:42:37.494106 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:42:37.495002 | orchestrator | 2025-03-22 22:42:37.495938 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-03-22 22:42:37.496542 | orchestrator | Saturday 22 March 2025 22:42:37 +0000 (0:00:00.212) 0:01:09.234 ******** 2025-03-22 22:42:38.716708 | orchestrator | ok: [testbed-manager] 2025-03-22 22:42:38.717270 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:42:38.717469 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:42:38.717887 | orchestrator | ok: [testbed-node-0] 2025-03-22 22:42:38.718663 | orchestrator | ok: [testbed-node-1] 2025-03-22 22:42:38.718854 | orchestrator | ok: [testbed-node-4] 2025-03-22 22:42:38.718883 | orchestrator | ok: [testbed-node-2] 2025-03-22 22:42:38.719990 | orchestrator | 2025-03-22 22:42:38.720686 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-03-22 22:42:38.721274 | orchestrator | Saturday 22 March 2025 22:42:38 +0000 (0:00:01.222) 0:01:10.456 ******** 2025-03-22 22:42:40.678973 | orchestrator | changed: [testbed-node-0] 2025-03-22 22:42:40.679118 | orchestrator | changed: [testbed-manager] 2025-03-22 22:42:40.679143 | orchestrator | changed: [testbed-node-4] 2025-03-22 22:42:40.679748 | orchestrator | changed: [testbed-node-2] 2025-03-22 22:42:40.680634 | orchestrator | changed: [testbed-node-5] 2025-03-22 22:42:40.681809 | orchestrator | ok: [testbed-node-1] 2025-03-22 22:42:40.683028 | orchestrator | changed: 
[testbed-node-3] 2025-03-22 22:42:40.684202 | orchestrator | 2025-03-22 22:42:40.684532 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-03-22 22:42:40.685734 | orchestrator | Saturday 22 March 2025 22:42:40 +0000 (0:00:01.960) 0:01:12.417 ******** 2025-03-22 22:42:50.490686 | orchestrator | ok: [testbed-node-0] 2025-03-22 22:42:50.490861 | orchestrator | ok: [testbed-manager] 2025-03-22 22:42:50.490889 | orchestrator | ok: [testbed-node-4] 2025-03-22 22:42:50.491831 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:42:50.493751 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:42:50.493818 | orchestrator | ok: [testbed-node-2] 2025-03-22 22:42:50.496734 | orchestrator | changed: [testbed-node-1] 2025-03-22 22:42:50.501071 | orchestrator | 2025-03-22 22:42:50.507760 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-03-22 22:42:50.507799 | orchestrator | Saturday 22 March 2025 22:42:50 +0000 (0:00:09.811) 0:01:22.229 ******** 2025-03-22 22:43:23.058381 | orchestrator | ok: [testbed-manager] 2025-03-22 22:43:23.059805 | orchestrator | ok: [testbed-node-1] 2025-03-22 22:43:23.059860 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:43:23.059883 | orchestrator | ok: [testbed-node-4] 2025-03-22 22:43:23.060836 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:43:23.061450 | orchestrator | ok: [testbed-node-0] 2025-03-22 22:43:23.062117 | orchestrator | ok: [testbed-node-2] 2025-03-22 22:43:23.063035 | orchestrator | 2025-03-22 22:43:23.063427 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-03-22 22:43:23.064577 | orchestrator | Saturday 22 March 2025 22:43:23 +0000 (0:00:32.562) 0:01:54.791 ******** 2025-03-22 22:44:37.854259 | orchestrator | changed: [testbed-manager] 2025-03-22 22:44:37.854478 | orchestrator | changed: [testbed-node-5] 2025-03-22 22:44:37.854502 | orchestrator | changed: 
[testbed-node-3] 2025-03-22 22:44:37.854517 | orchestrator | changed: [testbed-node-1] 2025-03-22 22:44:37.854531 | orchestrator | changed: [testbed-node-4] 2025-03-22 22:44:37.854552 | orchestrator | changed: [testbed-node-0] 2025-03-22 22:44:37.855439 | orchestrator | changed: [testbed-node-2] 2025-03-22 22:44:37.855472 | orchestrator | 2025-03-22 22:44:37.856507 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-03-22 22:44:37.857238 | orchestrator | Saturday 22 March 2025 22:44:37 +0000 (0:01:14.794) 0:03:09.586 ******** 2025-03-22 22:44:39.870568 | orchestrator | ok: [testbed-node-1] 2025-03-22 22:44:39.871145 | orchestrator | ok: [testbed-manager] 2025-03-22 22:44:39.871193 | orchestrator | ok: [testbed-node-0] 2025-03-22 22:44:39.871843 | orchestrator | ok: [testbed-node-4] 2025-03-22 22:44:39.872481 | orchestrator | ok: [testbed-node-2] 2025-03-22 22:44:39.873428 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:44:39.873730 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:44:39.873811 | orchestrator | 2025-03-22 22:44:39.873881 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-03-22 22:44:39.875565 | orchestrator | Saturday 22 March 2025 22:44:39 +0000 (0:00:02.022) 0:03:11.609 ******** 2025-03-22 22:44:52.920550 | orchestrator | ok: [testbed-node-0] 2025-03-22 22:44:52.920729 | orchestrator | ok: [testbed-node-1] 2025-03-22 22:44:52.920752 | orchestrator | ok: [testbed-node-4] 2025-03-22 22:44:52.920765 | orchestrator | ok: [testbed-node-2] 2025-03-22 22:44:52.920783 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:44:52.921296 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:44:52.921727 | orchestrator | changed: [testbed-manager] 2025-03-22 22:44:52.922424 | orchestrator | 2025-03-22 22:44:52.922905 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-03-22 22:44:52.923926 | orchestrator 
| Saturday 22 March 2025 22:44:52 +0000 (0:00:13.045) 0:03:24.655 ******** 2025-03-22 22:44:53.379624 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-03-22 22:44:53.380323 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-03-22 22:44:53.381620 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-03-22 22:44:53.383228 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-03-22 22:44:53.383712 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-03-22 22:44:53.385507 | orchestrator | 2025-03-22 22:44:53.386366 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-03-22 22:44:53.387136 | orchestrator | Saturday 22 March 2025 22:44:53 +0000 (0:00:00.465) 0:03:25.120 ******** 2025-03-22 22:44:53.451070 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-03-22 22:44:53.486126 | orchestrator | skipping: [testbed-manager] 2025-03-22 22:44:53.586831 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-03-22 22:44:53.587638 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-03-22 22:44:55.107008 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:44:55.107891 | orchestrator | skipping: [testbed-node-4] 2025-03-22 22:44:55.108173 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-03-22 22:44:55.108263 | orchestrator | skipping: [testbed-node-5] 2025-03-22 22:44:55.108826 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-03-22 22:44:55.110094 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-03-22 22:44:55.110295 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-03-22 22:44:55.110685 | orchestrator | 2025-03-22 22:44:55.110984 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-03-22 22:44:55.111301 | 
orchestrator | Saturday 22 March 2025 22:44:55 +0000 (0:00:01.727) 0:03:26.848 ******** 2025-03-22 22:44:55.150113 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-03-22 22:44:55.195961 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-03-22 22:44:55.195999 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-03-22 22:44:55.197418 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-03-22 22:44:55.198537 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-03-22 22:44:55.198568 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-03-22 22:44:55.202109 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-03-22 22:44:55.229864 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-03-22 22:44:55.229905 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-03-22 22:44:55.229921 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-03-22 22:44:55.229946 | orchestrator | skipping: [testbed-manager] 2025-03-22 22:44:55.341444 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-03-22 22:44:55.342055 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-03-22 22:44:55.342087 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-03-22 22:44:55.342100 | orchestrator | skipping: [testbed-node-3] => (item={'name': 
'net.core.wmem_max', 'value': 16777216})  2025-03-22 22:44:55.342114 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-03-22 22:44:55.342133 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-03-22 22:44:55.342305 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-03-22 22:44:55.343187 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-03-22 22:44:55.343804 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-03-22 22:44:55.344918 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-03-22 22:44:55.345593 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-03-22 22:44:55.346872 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-03-22 22:44:55.347343 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-03-22 22:44:55.347369 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-03-22 22:45:02.067439 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:45:02.067987 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-03-22 22:45:02.068608 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-03-22 22:45:02.069912 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-03-22 22:45:02.071149 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-03-22 22:45:02.071591 | orchestrator | 
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-03-22 22:45:02.072463 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-03-22 22:45:02.073326 | orchestrator | skipping: [testbed-node-4] 2025-03-22 22:45:02.073649 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-03-22 22:45:02.074817 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-03-22 22:45:02.075355 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-03-22 22:45:02.075930 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-03-22 22:45:02.077186 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-03-22 22:45:02.077660 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-03-22 22:45:02.078632 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-03-22 22:45:02.079603 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-03-22 22:45:02.080626 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-03-22 22:45:02.081412 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-03-22 22:45:02.083028 | orchestrator | skipping: [testbed-node-5] 2025-03-22 22:45:02.083114 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-03-22 22:45:02.083413 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-03-22 22:45:02.084323 | orchestrator | 
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-03-22 22:45:02.084663 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-03-22 22:45:02.085079 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-03-22 22:45:02.085945 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-03-22 22:45:02.086347 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-03-22 22:45:02.086710 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-03-22 22:45:02.087277 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-03-22 22:45:02.087762 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-03-22 22:45:02.088308 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-03-22 22:45:02.088756 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-03-22 22:45:02.089707 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-03-22 22:45:02.089962 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-03-22 22:45:02.091786 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-03-22 22:45:02.093607 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-03-22 22:45:02.093660 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-03-22 22:45:02.093675 | orchestrator | changed: [testbed-node-1] => 
(item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-03-22 22:45:02.093689 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-03-22 22:45:02.093709 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-03-22 22:45:02.095451 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-03-22 22:45:02.095682 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-03-22 22:45:02.096500 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-03-22 22:45:02.096668 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-03-22 22:45:02.096869 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-03-22 22:45:02.097226 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-03-22 22:45:02.097611 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-03-22 22:45:02.097963 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-03-22 22:45:02.099482 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-03-22 22:45:02.099583 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-03-22 22:45:02.099979 | orchestrator | 2025-03-22 22:45:02.100086 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-03-22 22:45:02.100376 | orchestrator | Saturday 22 March 2025 22:45:02 +0000 (0:00:06.959) 0:03:33.807 ******** 2025-03-22 22:45:03.684425 | orchestrator | changed: [testbed-manager] => (item={'name': 
'vm.swappiness', 'value': 1}) 2025-03-22 22:45:03.688019 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-03-22 22:45:03.688739 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-03-22 22:45:03.689598 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-03-22 22:45:03.690360 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-03-22 22:45:03.693278 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-03-22 22:45:03.693914 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-03-22 22:45:03.693954 | orchestrator | 2025-03-22 22:45:03.694549 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-03-22 22:45:03.695301 | orchestrator | Saturday 22 March 2025 22:45:03 +0000 (0:00:01.618) 0:03:35.425 ******** 2025-03-22 22:45:03.750775 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-03-22 22:45:03.781319 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-03-22 22:45:03.781368 | orchestrator | skipping: [testbed-manager] 2025-03-22 22:45:03.820997 | orchestrator | skipping: [testbed-node-0] 2025-03-22 22:45:03.821838 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-03-22 22:45:03.822367 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-03-22 22:45:03.852756 | orchestrator | skipping: [testbed-node-1] 2025-03-22 22:45:03.878648 | orchestrator | skipping: [testbed-node-2] 2025-03-22 22:45:04.460158 | orchestrator | changed: [testbed-node-4] => (item={'name': 
'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-03-22 22:45:04.460387 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-03-22 22:45:04.460439 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-03-22 22:45:04.460460 | orchestrator | 2025-03-22 22:45:04.460987 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-03-22 22:45:04.461019 | orchestrator | Saturday 22 March 2025 22:45:04 +0000 (0:00:00.775) 0:03:36.200 ******** 2025-03-22 22:45:04.535070 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-03-22 22:45:04.535485 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-03-22 22:45:04.569489 | orchestrator | skipping: [testbed-manager] 2025-03-22 22:45:04.608295 | orchestrator | skipping: [testbed-node-0] 2025-03-22 22:45:04.609426 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-03-22 22:45:04.612349 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-03-22 22:45:04.643506 | orchestrator | skipping: [testbed-node-1] 2025-03-22 22:45:04.677355 | orchestrator | skipping: [testbed-node-2] 2025-03-22 22:45:07.298375 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-03-22 22:45:07.298529 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-03-22 22:45:07.299160 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-03-22 22:45:07.299283 | orchestrator | 2025-03-22 22:45:07.300121 | orchestrator | TASK [osism.commons.limits : Include 
limits tasks] *****************************
2025-03-22 22:45:07.301742 | orchestrator | Saturday 22 March 2025 22:45:07 +0000 (0:00:02.838) 0:03:39.039 ********
2025-03-22 22:45:07.364979 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:45:07.412150 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:45:07.443483 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:45:07.481130 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:45:07.513853 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:45:07.663676 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:45:07.664812 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:45:07.665569 | orchestrator |
2025-03-22 22:45:07.665834 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-03-22 22:45:07.666468 | orchestrator | Saturday 22 March 2025 22:45:07 +0000 (0:00:00.366) 0:03:39.405 ********
2025-03-22 22:45:13.014787 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:45:13.015777 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:45:13.016425 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:45:13.017185 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:45:13.018751 | orchestrator | ok: [testbed-manager]
2025-03-22 22:45:13.019309 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:45:13.019836 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:45:13.020451 | orchestrator |
2025-03-22 22:45:13.020559 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-03-22 22:45:13.021015 | orchestrator | Saturday 22 March 2025 22:45:13 +0000 (0:00:05.350) 0:03:44.755 ********
2025-03-22 22:45:13.105307 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-03-22 22:45:13.170386 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-03-22 22:45:13.170434 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:45:13.171328 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-03-22 22:45:13.214306 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:45:13.273659 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-03-22 22:45:13.273698 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:45:13.274336 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-03-22 22:45:13.315943 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:45:13.316342 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-03-22 22:45:13.385930 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:45:13.387152 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:45:13.387787 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-03-22 22:45:13.389558 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:45:13.389826 | orchestrator |
2025-03-22 22:45:13.390361 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-03-22 22:45:13.390582 | orchestrator | Saturday 22 March 2025 22:45:13 +0000 (0:00:00.372) 0:03:45.127 ********
2025-03-22 22:45:14.644800 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-03-22 22:45:14.644949 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-03-22 22:45:14.645289 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-03-22 22:45:14.646126 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-03-22 22:45:14.646546 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-03-22 22:45:14.647313 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-03-22 22:45:14.647946 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-03-22 22:45:14.648107 | orchestrator |
2025-03-22 22:45:14.648443 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-03-22 22:45:14.648801 | orchestrator | Saturday 22 March 2025 22:45:14 +0000 (0:00:01.256) 0:03:46.383 ********
2025-03-22 22:45:15.246809 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-22 22:45:15.247587 | orchestrator |
2025-03-22 22:45:15.248467 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-03-22 22:45:15.252277 | orchestrator | Saturday 22 March 2025 22:45:15 +0000 (0:00:00.605) 0:03:46.989 ********
2025-03-22 22:45:16.787646 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:45:16.790848 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:45:16.792382 | orchestrator | ok: [testbed-manager]
2025-03-22 22:45:16.792519 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:45:16.793019 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:45:16.793113 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:45:16.794126 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:45:17.460286 | orchestrator |
2025-03-22 22:45:17.460399 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-03-22 22:45:17.460418 | orchestrator | Saturday 22 March 2025 22:45:16 +0000 (0:00:01.535) 0:03:48.525 ********
2025-03-22 22:45:17.460448 | orchestrator | ok: [testbed-manager]
2025-03-22 22:45:17.461158 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:45:17.461974 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:45:17.462307 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:45:17.463121 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:45:17.463900 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:45:17.464586 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:45:17.468031 | orchestrator |
2025-03-22 22:45:17.468063 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-03-22 22:45:18.189606 | orchestrator | Saturday 22 March 2025 22:45:17 +0000 (0:00:00.678) 0:03:49.203 ********
2025-03-22 22:45:18.190432 | orchestrator | changed: [testbed-manager]
2025-03-22 22:45:18.190538 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:45:18.190558 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:45:18.190604 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:45:18.190624 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:45:18.190678 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:45:18.190697 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:45:18.190956 | orchestrator |
2025-03-22 22:45:18.192523 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-03-22 22:45:18.193414 | orchestrator | Saturday 22 March 2025 22:45:18 +0000 (0:00:00.726) 0:03:49.930 ********
2025-03-22 22:45:18.853545 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:45:18.853782 | orchestrator | ok: [testbed-manager]
2025-03-22 22:45:18.853809 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:45:18.853828 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:45:18.854643 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:45:18.855401 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:45:18.856030 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:45:18.858988 | orchestrator |
2025-03-22 22:45:19.975953 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-03-22 22:45:19.976050 | orchestrator | Saturday 22 March 2025 22:45:18 +0000 (0:00:00.665) 0:03:50.595 ********
2025-03-22 22:45:19.976080 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1742681648.8192139, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-03-22 22:45:19.976145 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1742681640.8185973, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-03-22 22:45:19.977699 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1742681637.0747664, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-03-22 22:45:19.977859 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1742681649.4915645, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-03-22 22:45:19.980038 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1742681640.8888516, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-03-22 22:45:19.980791 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1742681638.7748134, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-03-22 22:45:19.981618 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1742681642.5923913, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-03-22 22:45:19.982119 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1742681670.5035934, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-03-22 22:45:19.983419 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1742681588.2076745, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-03-22 22:45:19.984550 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1742681575.966531, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-03-22 22:45:19.985511 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1742681585.1260645, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-03-22 22:45:19.986639 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1742681576.5497274, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-03-22 22:45:19.989076 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1742681576.5381181, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-03-22 22:45:19.990514 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1742681575.715355, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-03-22 22:45:19.991302 | orchestrator |
2025-03-22 22:45:19.992183 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2025-03-22 22:45:19.992572 | orchestrator | Saturday 22 March 2025 22:45:19 +0000 (0:00:01.121) 0:03:51.717 ********
2025-03-22 22:45:21.281109 | orchestrator | changed: [testbed-manager]
2025-03-22 22:45:21.281500 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:45:21.281971 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:45:21.285019 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:45:21.285507 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:45:21.286860 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:45:22.641898 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:45:22.642006 | orchestrator |
2025-03-22 22:45:22.642109 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-03-22 22:45:22.642128 | orchestrator | Saturday 22 March 2025 22:45:21 +0000 (0:00:01.303) 0:03:53.020 ********
2025-03-22 22:45:22.642159 | orchestrator | changed: [testbed-manager]
2025-03-22 22:45:22.644461 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:45:22.645795 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:45:22.645824 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:45:22.646826 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:45:22.647986 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:45:22.648016 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:45:22.648272 | orchestrator |
2025-03-22 22:45:22.649230 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2025-03-22 22:45:22.650297 | orchestrator | Saturday 22 March 2025 22:45:22 +0000 (0:00:01.363) 0:03:54.383 ********
2025-03-22 22:45:23.920577 | orchestrator | changed: [testbed-manager]
2025-03-22 22:45:23.923898 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:45:23.926280 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:45:23.926946 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:45:23.928767 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:45:23.929464 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:45:23.930279 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:45:23.931309 | orchestrator |
2025-03-22 22:45:23.931699 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-03-22 22:45:23.932527 | orchestrator | Saturday 22 March 2025 22:45:23 +0000 (0:00:01.275) 0:03:55.659 ********
2025-03-22 22:45:24.000735 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:45:24.044173 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:45:24.096876 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:45:24.149991 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:45:24.212953 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:45:24.283586 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:45:24.284194 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:45:24.284924 | orchestrator |
2025-03-22 22:45:24.284958 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-03-22 22:45:24.285389 | orchestrator | Saturday 22 March 2025 22:45:24 +0000 (0:00:00.366) 0:03:56.026 ********
2025-03-22 22:45:25.232929 | orchestrator | ok: [testbed-manager]
2025-03-22 22:45:25.233174 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:45:25.234091 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:45:25.234458 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:45:25.234986 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:45:25.236426 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:45:25.237474 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:45:25.238059 | orchestrator |
2025-03-22 22:45:25.239609 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-03-22 22:45:25.243340 | orchestrator | Saturday 22 March 2025 22:45:25 +0000 (0:00:00.945) 0:03:56.971 ********
2025-03-22 22:45:25.713737 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-22 22:45:25.713942 | orchestrator |
2025-03-22 22:45:25.714803 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-03-22 22:45:25.715375 | orchestrator | Saturday 22 March 2025 22:45:25 +0000 (0:00:00.484) 0:03:57.455 ********
2025-03-22 22:45:33.989506 | orchestrator | ok: [testbed-manager]
2025-03-22 22:45:33.990455 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:45:33.991459 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:45:33.991492 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:45:33.991514 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:45:33.994710 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:45:33.995016 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:45:33.995269 | orchestrator |
2025-03-22 22:45:33.995537 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-03-22 22:45:33.995754 | orchestrator | Saturday 22 March 2025 22:45:33 +0000 (0:00:08.273) 0:04:05.728 ********
2025-03-22 22:45:35.404409 | orchestrator | ok: [testbed-manager]
2025-03-22 22:45:35.404560 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:45:35.405807 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:45:35.406228 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:45:35.406871 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:45:35.407572 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:45:35.408763 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:45:35.409072 | orchestrator |
2025-03-22 22:45:35.409834 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-03-22 22:45:35.410678 | orchestrator | Saturday 22 March 2025 22:45:35 +0000 (0:00:01.412) 0:04:07.141 ********
2025-03-22 22:45:36.492166 | orchestrator | ok: [testbed-manager]
2025-03-22 22:45:36.492966 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:45:36.493006 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:45:36.493177 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:45:36.494459 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:45:36.494709 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:45:36.495555 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:45:36.496372 | orchestrator |
2025-03-22 22:45:36.497346 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-03-22 22:45:36.497733 | orchestrator | Saturday 22 March 2025 22:45:36 +0000 (0:00:01.091) 0:04:08.233 ********
2025-03-22 22:45:37.125081 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-22 22:45:37.125263 | orchestrator |
2025-03-22 22:45:37.125549 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-03-22 22:45:37.126299 | orchestrator | Saturday 22 March 2025 22:45:37 +0000 (0:00:00.632) 0:04:08.865 ********
2025-03-22 22:45:46.373525 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:45:46.375585 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:45:46.375654 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:45:46.377445 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:45:46.378324 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:45:46.378729 | orchestrator | changed: [testbed-manager]
2025-03-22 22:45:46.379335 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:45:46.379914 | orchestrator |
2025-03-22 22:45:46.380414 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-03-22 22:45:46.382744 | orchestrator | Saturday 22 March 2025 22:45:46 +0000 (0:00:09.244) 0:04:18.110 ********
2025-03-22 22:45:47.108112 | orchestrator | changed: [testbed-manager]
2025-03-22 22:45:47.108936 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:45:47.110006 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:45:47.110304 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:45:47.110846 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:45:47.112019 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:45:47.113282 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:45:47.113480 | orchestrator |
2025-03-22 22:45:47.113506 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-03-22 22:45:47.113528 | orchestrator | Saturday 22 March 2025 22:45:47 +0000 (0:00:00.739) 0:04:18.849 ********
2025-03-22 22:45:48.388282 | orchestrator | changed: [testbed-manager]
2025-03-22 22:45:48.388989 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:45:48.389029 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:45:48.389237 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:45:48.389265 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:45:48.389601 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:45:48.390095 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:45:48.390435 | orchestrator |
2025-03-22 22:45:48.390891 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-03-22 22:45:48.392771 | orchestrator | Saturday 22 March 2025 22:45:48 +0000 (0:00:01.277) 0:04:20.126 ********
2025-03-22 22:45:49.501519 | orchestrator | changed: [testbed-manager]
2025-03-22 22:45:49.502712 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:45:49.504494 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:45:49.505358 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:45:49.506432 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:45:49.507402 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:45:49.508105 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:45:49.508787 | orchestrator |
2025-03-22 22:45:49.509449 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-03-22 22:45:49.510188 | orchestrator | Saturday 22 March 2025 22:45:49 +0000 (0:00:01.115) 0:04:21.241 ********
2025-03-22 22:45:49.652161 | orchestrator | ok: [testbed-manager]
2025-03-22 22:45:49.688775 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:45:49.723637 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:45:49.767122 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:45:49.838988 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:45:49.840529 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:45:49.842680 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:45:49.844171 | orchestrator |
2025-03-22 22:45:49.845111 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-03-22 22:45:49.846397 | orchestrator | Saturday 22 March 2025 22:45:49 +0000 (0:00:00.354) 0:04:21.578 ********
2025-03-22 22:45:49.961237 | orchestrator | ok: [testbed-manager]
2025-03-22 22:45:50.014969 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:45:50.057470 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:45:50.103241 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:45:50.190845 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:45:50.191759 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:45:50.192533 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:45:50.193262 | orchestrator |
2025-03-22 22:45:50.194063 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-03-22 22:45:50.289772 | orchestrator | Saturday 22 March 2025 22:45:50 +0000 (0:00:00.363) 0:04:21.933 ********
2025-03-22 22:45:50.289844 | orchestrator | ok: [testbed-manager]
2025-03-22 22:45:50.387795 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:45:50.431802 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:45:50.475192 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:45:50.554351 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:45:50.555734 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:45:50.555769 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:45:50.555843 | orchestrator |
2025-03-22 22:45:50.555956 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-03-22 22:45:50.556387 | orchestrator | Saturday 22 March 2025 22:45:50 +0000 (0:00:00.363) 0:04:22.296 ********
2025-03-22 22:45:55.538833 | orchestrator | ok: [testbed-manager]
2025-03-22 22:45:55.539015 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:45:55.539054 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:45:55.539485 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:45:55.540336 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:45:55.540438 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:45:55.540892 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:45:55.541361 | orchestrator |
2025-03-22 22:45:55.541627 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-03-22 22:45:55.542983 | orchestrator | Saturday 22 March 2025 22:45:55 +0000 (0:00:04.983) 0:04:27.280 ********
2025-03-22 22:45:56.074644 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-22 22:45:56.075004 | orchestrator |
2025-03-22 22:45:56.075060 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-03-22 22:45:56.075470 | orchestrator | Saturday 22 March 2025 22:45:56 +0000 (0:00:00.534) 0:04:27.815 ********
2025-03-22 22:45:56.126259 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-03-22 22:45:56.170265 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-03-22 22:45:56.170354 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-03-22 22:45:56.170949 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-03-22 22:45:56.219608 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:45:56.219730 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-03-22 22:45:56.223083 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-03-22 22:45:56.262579 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:45:56.263278 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-03-22 22:45:56.314195 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-03-22 22:45:56.369884 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:45:56.369949 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-03-22 22:45:56.369975 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:45:56.370119 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-03-22 22:45:56.370900 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-03-22 22:45:56.371710 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-03-22 22:45:56.462664 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:45:56.463579 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:45:56.464071 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-03-22 22:45:56.464775 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-03-22 22:45:56.465336 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:45:56.465537 | orchestrator |
2025-03-22 22:45:56.466171 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-03-22 22:45:56.467116 | orchestrator | Saturday 22 March 2025 22:45:56 +0000 (0:00:00.389) 0:04:28.205 ********
2025-03-22 22:45:56.948924 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-22 22:45:56.949252 | orchestrator |
2025-03-22 22:45:56.949526 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-03-22 22:45:56.951096 | orchestrator | Saturday 22 March 2025 22:45:56 +0000 (0:00:00.484) 0:04:28.689 ********
2025-03-22 22:45:57.033513 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-03-22 22:45:57.082309 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-03-22 22:45:57.082353 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:45:57.124384 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-03-22 22:45:57.124467 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:45:57.124974 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-03-22 22:45:57.165738 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:45:57.165840 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-03-22 22:45:57.205392 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:45:57.206158 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-03-22 22:45:57.294514 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:45:57.295815 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:45:57.297065 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-03-22 22:45:57.298196 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:45:57.298925 | orchestrator |
2025-03-22 22:45:57.300222 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-03-22 22:45:57.300844 | orchestrator | Saturday 22 March 2025 22:45:57 +0000 (0:00:00.347) 0:04:29.037 ********
2025-03-22 22:45:57.989493 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-22 22:45:57.989655 | orchestrator |
2025-03-22 22:45:57.990315 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-03-22 22:45:57.991020 | orchestrator | Saturday 22 March 2025 22:45:57 +0000 (0:00:00.693) 0:04:29.730 ********
2025-03-22 22:46:33.074509 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:46:33.075385 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:46:33.075423 | orchestrator | changed: [testbed-manager]
2025-03-22 22:46:33.075438 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:46:33.075451 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:46:33.075465 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:46:33.075487 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:46:33.075821 | orchestrator |
2025-03-22 22:46:33.076585 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-03-22 22:46:33.076797 | orchestrator | Saturday 22 March 2025 22:46:33 +0000 (0:00:35.078) 0:05:04.809 ********
2025-03-22 22:46:42.020012 | orchestrator | changed: [testbed-manager]
2025-03-22 22:46:42.020193 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:46:42.020406 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:46:42.020801 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:46:42.021509 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:46:42.023921 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:46:42.024598 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:46:42.025308 | orchestrator |
2025-03-22 22:46:42.025408 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-03-22 22:46:42.026143 | orchestrator | Saturday 22 March 2025 22:46:42 +0000 (0:00:08.952) 0:05:13.761 ********
2025-03-22 22:46:50.771775 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:46:50.771942 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:46:50.772515 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:46:50.772851 | orchestrator | changed: [testbed-manager]
2025-03-22 22:46:50.772882 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:46:50.773316 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:46:50.776303 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:46:50.776576 | orchestrator |
2025-03-22 22:46:50.776606 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-03-22 22:46:50.776683 | orchestrator | Saturday 22 March 2025 22:46:50 +0000 (0:00:08.751) 0:05:22.513 ********
2025-03-22 22:46:52.624496 | orchestrator | ok: [testbed-manager]
2025-03-22 22:46:52.628252 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:46:52.629518 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:46:52.630078 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:46:52.630822 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:46:52.633347 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:46:52.634985 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:46:52.635018 | orchestrator |
2025-03-22 22:46:52.635048 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-03-22 22:46:52.635088 | orchestrator | Saturday 22 March 2025 22:46:52 +0000 (0:00:01.849) 0:05:24.362 ********
2025-03-22 22:46:59.406322 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:46:59.406552 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:46:59.407542 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:46:59.407571 | orchestrator | changed: [testbed-manager]
2025-03-22 22:46:59.407612 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:46:59.408336 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:46:59.408570 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:46:59.408674 | orchestrator |
2025-03-22 22:46:59.409223 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-03-22 22:46:59.409322 | orchestrator | Saturday 22 March 2025 22:46:59 +0000 (0:00:06.785) 0:05:31.147 ********
2025-03-22 22:46:59.925908 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-22 22:46:59.926851 | orchestrator |
2025-03-22 22:46:59.926916 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-03-22 22:46:59.928141 | orchestrator | Saturday 22 March 2025 22:46:59 +0000 (0:00:00.519) 0:05:31.667 ********
2025-03-22 22:47:00.788335 | orchestrator | changed: [testbed-manager]
2025-03-22 22:47:00.788912 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:47:00.788968 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:47:00.790142 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:47:00.790482 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:47:00.792625 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:47:00.793628 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:47:00.794987 | orchestrator |
2025-03-22 22:47:00.796105 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-03-22 22:47:00.796665 | orchestrator | Saturday 22 March 2025 22:47:00 +0000 (0:00:00.861) 0:05:32.528 ********
2025-03-22 22:47:02.718810 | orchestrator | ok: [testbed-manager]
2025-03-22 22:47:02.719465 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:47:02.721386 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:47:02.722477 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:47:02.724494 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:47:02.725558 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:47:02.726256 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:47:02.726962 | orchestrator |
2025-03-22 22:47:02.728056 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-03-22 22:47:02.729165 | orchestrator | Saturday 22 March 2025 22:47:02 +0000 (0:00:01.931) 0:05:34.460 ********
2025-03-22 22:47:03.572340 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:47:03.573341 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:47:03.574747 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:47:03.576393 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:47:03.577049 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:47:03.579033 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:47:03.580289 | orchestrator | changed: [testbed-manager]
2025-03-22 22:47:03.581280 | orchestrator |
2025-03-22 22:47:03.582408 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-03-22 22:47:03.583594 | orchestrator | Saturday 22 March 2025 22:47:03 +0000 (0:00:00.850) 0:05:35.311 ********
2025-03-22 22:47:03.686487 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:47:03.723102 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:47:03.771956 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:47:03.812138 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:47:03.889801 | orchestrator |
skipping: [testbed-node-3] 2025-03-22 22:47:03.893384 | orchestrator | skipping: [testbed-node-4] 2025-03-22 22:47:03.894374 | orchestrator | skipping: [testbed-node-5] 2025-03-22 22:47:03.895324 | orchestrator | 2025-03-22 22:47:03.896282 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-03-22 22:47:03.897076 | orchestrator | Saturday 22 March 2025 22:47:03 +0000 (0:00:00.321) 0:05:35.632 ******** 2025-03-22 22:47:03.991579 | orchestrator | skipping: [testbed-manager] 2025-03-22 22:47:04.040380 | orchestrator | skipping: [testbed-node-0] 2025-03-22 22:47:04.085029 | orchestrator | skipping: [testbed-node-1] 2025-03-22 22:47:04.139297 | orchestrator | skipping: [testbed-node-2] 2025-03-22 22:47:04.181496 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:47:04.432473 | orchestrator | skipping: [testbed-node-4] 2025-03-22 22:47:04.434330 | orchestrator | skipping: [testbed-node-5] 2025-03-22 22:47:04.435245 | orchestrator | 2025-03-22 22:47:04.435972 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-03-22 22:47:04.436496 | orchestrator | Saturday 22 March 2025 22:47:04 +0000 (0:00:00.541) 0:05:36.173 ******** 2025-03-22 22:47:04.518320 | orchestrator | ok: [testbed-manager] 2025-03-22 22:47:04.568321 | orchestrator | ok: [testbed-node-0] 2025-03-22 22:47:04.607716 | orchestrator | ok: [testbed-node-1] 2025-03-22 22:47:04.695554 | orchestrator | ok: [testbed-node-2] 2025-03-22 22:47:04.774378 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:47:04.774482 | orchestrator | ok: [testbed-node-4] 2025-03-22 22:47:04.775666 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:47:04.777072 | orchestrator | 2025-03-22 22:47:04.777744 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-03-22 22:47:04.778144 | orchestrator | Saturday 22 March 2025 22:47:04 +0000 (0:00:00.341) 0:05:36.515 ******** 2025-03-22 
22:47:04.897619 | orchestrator | skipping: [testbed-manager] 2025-03-22 22:47:04.935906 | orchestrator | skipping: [testbed-node-0] 2025-03-22 22:47:04.992274 | orchestrator | skipping: [testbed-node-1] 2025-03-22 22:47:05.041956 | orchestrator | skipping: [testbed-node-2] 2025-03-22 22:47:05.133005 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:47:05.133128 | orchestrator | skipping: [testbed-node-4] 2025-03-22 22:47:05.133893 | orchestrator | skipping: [testbed-node-5] 2025-03-22 22:47:05.135575 | orchestrator | 2025-03-22 22:47:05.135611 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-03-22 22:47:05.136046 | orchestrator | Saturday 22 March 2025 22:47:05 +0000 (0:00:00.357) 0:05:36.872 ******** 2025-03-22 22:47:05.280563 | orchestrator | ok: [testbed-manager] 2025-03-22 22:47:05.319649 | orchestrator | ok: [testbed-node-0] 2025-03-22 22:47:05.370722 | orchestrator | ok: [testbed-node-1] 2025-03-22 22:47:05.417605 | orchestrator | ok: [testbed-node-2] 2025-03-22 22:47:05.503454 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:47:05.504109 | orchestrator | ok: [testbed-node-4] 2025-03-22 22:47:05.505121 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:47:05.505928 | orchestrator | 2025-03-22 22:47:05.506458 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-03-22 22:47:05.507140 | orchestrator | Saturday 22 March 2025 22:47:05 +0000 (0:00:00.373) 0:05:37.246 ******** 2025-03-22 22:47:05.585174 | orchestrator | ok: [testbed-manager] =>  2025-03-22 22:47:05.585381 | orchestrator |  docker_version: 5:27.5.1 2025-03-22 22:47:05.656851 | orchestrator | ok: [testbed-node-0] =>  2025-03-22 22:47:05.657177 | orchestrator |  docker_version: 5:27.5.1 2025-03-22 22:47:05.827573 | orchestrator | ok: [testbed-node-1] =>  2025-03-22 22:47:05.828412 | orchestrator |  docker_version: 5:27.5.1 2025-03-22 22:47:05.877260 | orchestrator | ok: 
[testbed-node-2] =>  2025-03-22 22:47:05.877599 | orchestrator |  docker_version: 5:27.5.1 2025-03-22 22:47:05.918402 | orchestrator | ok: [testbed-node-3] =>  2025-03-22 22:47:05.922193 | orchestrator |  docker_version: 5:27.5.1 2025-03-22 22:47:05.998849 | orchestrator | ok: [testbed-node-4] =>  2025-03-22 22:47:05.999480 | orchestrator |  docker_version: 5:27.5.1 2025-03-22 22:47:06.000845 | orchestrator | ok: [testbed-node-5] =>  2025-03-22 22:47:06.001931 | orchestrator |  docker_version: 5:27.5.1 2025-03-22 22:47:06.002449 | orchestrator | 2025-03-22 22:47:06.003463 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-03-22 22:47:06.004468 | orchestrator | Saturday 22 March 2025 22:47:05 +0000 (0:00:00.494) 0:05:37.741 ******** 2025-03-22 22:47:06.082687 | orchestrator | ok: [testbed-manager] =>  2025-03-22 22:47:06.082849 | orchestrator |  docker_cli_version: 5:27.5.1 2025-03-22 22:47:06.125862 | orchestrator | ok: [testbed-node-0] =>  2025-03-22 22:47:06.126554 | orchestrator |  docker_cli_version: 5:27.5.1 2025-03-22 22:47:06.165356 | orchestrator | ok: [testbed-node-1] =>  2025-03-22 22:47:06.165999 | orchestrator |  docker_cli_version: 5:27.5.1 2025-03-22 22:47:06.245388 | orchestrator | ok: [testbed-node-2] =>  2025-03-22 22:47:06.246437 | orchestrator |  docker_cli_version: 5:27.5.1 2025-03-22 22:47:06.329022 | orchestrator | ok: [testbed-node-3] =>  2025-03-22 22:47:06.330266 | orchestrator |  docker_cli_version: 5:27.5.1 2025-03-22 22:47:06.330462 | orchestrator | ok: [testbed-node-4] =>  2025-03-22 22:47:06.333918 | orchestrator |  docker_cli_version: 5:27.5.1 2025-03-22 22:47:06.334182 | orchestrator | ok: [testbed-node-5] =>  2025-03-22 22:47:06.334519 | orchestrator |  docker_cli_version: 5:27.5.1 2025-03-22 22:47:06.334543 | orchestrator | 2025-03-22 22:47:06.334562 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-03-22 22:47:06.335276 | 
orchestrator | Saturday 22 March 2025 22:47:06 +0000 (0:00:00.330) 0:05:38.071 ******** 2025-03-22 22:47:06.430872 | orchestrator | skipping: [testbed-manager] 2025-03-22 22:47:06.482365 | orchestrator | skipping: [testbed-node-0] 2025-03-22 22:47:06.520052 | orchestrator | skipping: [testbed-node-1] 2025-03-22 22:47:06.558484 | orchestrator | skipping: [testbed-node-2] 2025-03-22 22:47:06.599283 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:47:06.673891 | orchestrator | skipping: [testbed-node-4] 2025-03-22 22:47:06.675425 | orchestrator | skipping: [testbed-node-5] 2025-03-22 22:47:06.675516 | orchestrator | 2025-03-22 22:47:06.676068 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-03-22 22:47:06.676951 | orchestrator | Saturday 22 March 2025 22:47:06 +0000 (0:00:00.343) 0:05:38.414 ******** 2025-03-22 22:47:06.749616 | orchestrator | skipping: [testbed-manager] 2025-03-22 22:47:06.790294 | orchestrator | skipping: [testbed-node-0] 2025-03-22 22:47:06.869312 | orchestrator | skipping: [testbed-node-1] 2025-03-22 22:47:06.903903 | orchestrator | skipping: [testbed-node-2] 2025-03-22 22:47:06.992531 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:47:06.993320 | orchestrator | skipping: [testbed-node-4] 2025-03-22 22:47:06.994416 | orchestrator | skipping: [testbed-node-5] 2025-03-22 22:47:06.994870 | orchestrator | 2025-03-22 22:47:06.998834 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-03-22 22:47:06.998918 | orchestrator | Saturday 22 March 2025 22:47:06 +0000 (0:00:00.319) 0:05:38.734 ******** 2025-03-22 22:47:07.539706 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-22 22:47:07.539879 | orchestrator | 2025-03-22 
22:47:07.540388 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-03-22 22:47:07.541153 | orchestrator | Saturday 22 March 2025 22:47:07 +0000 (0:00:00.546) 0:05:39.281 ******** 2025-03-22 22:47:08.513761 | orchestrator | ok: [testbed-node-2] 2025-03-22 22:47:08.514165 | orchestrator | ok: [testbed-node-1] 2025-03-22 22:47:08.514995 | orchestrator | ok: [testbed-node-4] 2025-03-22 22:47:08.515135 | orchestrator | ok: [testbed-manager] 2025-03-22 22:47:08.517289 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:47:08.518223 | orchestrator | ok: [testbed-node-0] 2025-03-22 22:47:08.519508 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:47:08.520452 | orchestrator | 2025-03-22 22:47:08.520906 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-03-22 22:47:08.521754 | orchestrator | Saturday 22 March 2025 22:47:08 +0000 (0:00:00.973) 0:05:40.255 ******** 2025-03-22 22:47:11.626468 | orchestrator | ok: [testbed-node-2] 2025-03-22 22:47:11.626700 | orchestrator | ok: [testbed-node-4] 2025-03-22 22:47:11.628059 | orchestrator | ok: [testbed-node-1] 2025-03-22 22:47:11.628531 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:47:11.629495 | orchestrator | ok: [testbed-node-0] 2025-03-22 22:47:11.630359 | orchestrator | ok: [testbed-manager] 2025-03-22 22:47:11.631585 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:47:11.635262 | orchestrator | 2025-03-22 22:47:11.730314 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-03-22 22:47:11.730376 | orchestrator | Saturday 22 March 2025 22:47:11 +0000 (0:00:03.114) 0:05:43.369 ******** 2025-03-22 22:47:11.730402 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-03-22 22:47:11.730907 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-03-22 22:47:11.731193 | orchestrator | skipping: [testbed-manager] => 
(item=docker-engine)  2025-03-22 22:47:11.820564 | orchestrator | skipping: [testbed-manager] 2025-03-22 22:47:11.820995 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-03-22 22:47:11.824133 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-03-22 22:47:12.083333 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-03-22 22:47:12.083534 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-03-22 22:47:12.083942 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-03-22 22:47:12.083976 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-03-22 22:47:12.163554 | orchestrator | skipping: [testbed-node-0] 2025-03-22 22:47:12.164314 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-03-22 22:47:12.164352 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-03-22 22:47:12.248840 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-03-22 22:47:12.249507 | orchestrator | skipping: [testbed-node-1] 2025-03-22 22:47:12.250394 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-03-22 22:47:12.254176 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-03-22 22:47:12.336046 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-03-22 22:47:12.336152 | orchestrator | skipping: [testbed-node-2] 2025-03-22 22:47:12.336234 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-03-22 22:47:12.336441 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-03-22 22:47:12.336843 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-03-22 22:47:12.524884 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:47:12.525272 | orchestrator | skipping: [testbed-node-4] 2025-03-22 22:47:12.525300 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-03-22 22:47:12.525718 | orchestrator | 
skipping: [testbed-node-5] => (item=docker.io)  2025-03-22 22:47:12.526274 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-03-22 22:47:12.526518 | orchestrator | skipping: [testbed-node-5] 2025-03-22 22:47:12.527041 | orchestrator | 2025-03-22 22:47:12.527716 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-03-22 22:47:12.527935 | orchestrator | Saturday 22 March 2025 22:47:12 +0000 (0:00:00.896) 0:05:44.266 ******** 2025-03-22 22:47:20.396683 | orchestrator | ok: [testbed-manager] 2025-03-22 22:47:20.396912 | orchestrator | changed: [testbed-node-2] 2025-03-22 22:47:20.398359 | orchestrator | changed: [testbed-node-1] 2025-03-22 22:47:20.400229 | orchestrator | changed: [testbed-node-4] 2025-03-22 22:47:20.401126 | orchestrator | changed: [testbed-node-0] 2025-03-22 22:47:20.402191 | orchestrator | changed: [testbed-node-5] 2025-03-22 22:47:20.402989 | orchestrator | changed: [testbed-node-3] 2025-03-22 22:47:20.404040 | orchestrator | 2025-03-22 22:47:20.404745 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-03-22 22:47:20.405356 | orchestrator | Saturday 22 March 2025 22:47:20 +0000 (0:00:07.866) 0:05:52.133 ******** 2025-03-22 22:47:21.708653 | orchestrator | changed: [testbed-node-2] 2025-03-22 22:47:21.709006 | orchestrator | changed: [testbed-node-1] 2025-03-22 22:47:21.710456 | orchestrator | ok: [testbed-manager] 2025-03-22 22:47:21.712296 | orchestrator | changed: [testbed-node-0] 2025-03-22 22:47:21.713144 | orchestrator | changed: [testbed-node-3] 2025-03-22 22:47:21.715814 | orchestrator | changed: [testbed-node-4] 2025-03-22 22:47:21.716407 | orchestrator | changed: [testbed-node-5] 2025-03-22 22:47:21.717557 | orchestrator | 2025-03-22 22:47:21.718721 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-03-22 22:47:21.719956 | orchestrator | Saturday 22 March 2025 
22:47:21 +0000 (0:00:01.315) 0:05:53.448 ******** 2025-03-22 22:47:30.513868 | orchestrator | ok: [testbed-manager] 2025-03-22 22:47:30.514911 | orchestrator | changed: [testbed-node-2] 2025-03-22 22:47:30.515243 | orchestrator | changed: [testbed-node-4] 2025-03-22 22:47:30.515702 | orchestrator | changed: [testbed-node-1] 2025-03-22 22:47:30.519044 | orchestrator | changed: [testbed-node-0] 2025-03-22 22:47:30.519637 | orchestrator | changed: [testbed-node-3] 2025-03-22 22:47:30.520322 | orchestrator | changed: [testbed-node-5] 2025-03-22 22:47:30.521050 | orchestrator | 2025-03-22 22:47:30.521401 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-03-22 22:47:30.521899 | orchestrator | Saturday 22 March 2025 22:47:30 +0000 (0:00:08.804) 0:06:02.253 ******** 2025-03-22 22:47:33.996462 | orchestrator | changed: [testbed-node-1] 2025-03-22 22:47:33.996621 | orchestrator | changed: [testbed-node-2] 2025-03-22 22:47:33.996644 | orchestrator | changed: [testbed-manager] 2025-03-22 22:47:33.996664 | orchestrator | changed: [testbed-node-4] 2025-03-22 22:47:33.997756 | orchestrator | changed: [testbed-node-0] 2025-03-22 22:47:33.998974 | orchestrator | changed: [testbed-node-5] 2025-03-22 22:47:33.999005 | orchestrator | changed: [testbed-node-3] 2025-03-22 22:47:34.000397 | orchestrator | 2025-03-22 22:47:34.001585 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-03-22 22:47:34.003177 | orchestrator | Saturday 22 March 2025 22:47:33 +0000 (0:00:03.481) 0:06:05.735 ******** 2025-03-22 22:47:35.747583 | orchestrator | ok: [testbed-manager] 2025-03-22 22:47:35.748157 | orchestrator | changed: [testbed-node-0] 2025-03-22 22:47:35.748890 | orchestrator | changed: [testbed-node-1] 2025-03-22 22:47:35.749660 | orchestrator | changed: [testbed-node-2] 2025-03-22 22:47:35.750684 | orchestrator | changed: [testbed-node-3] 2025-03-22 22:47:35.752513 | orchestrator | changed: 
[testbed-node-4] 2025-03-22 22:47:35.753745 | orchestrator | changed: [testbed-node-5] 2025-03-22 22:47:37.131105 | orchestrator | 2025-03-22 22:47:37.131289 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-03-22 22:47:37.131321 | orchestrator | Saturday 22 March 2025 22:47:35 +0000 (0:00:01.753) 0:06:07.488 ******** 2025-03-22 22:47:37.131349 | orchestrator | ok: [testbed-manager] 2025-03-22 22:47:37.131420 | orchestrator | changed: [testbed-node-1] 2025-03-22 22:47:37.131818 | orchestrator | changed: [testbed-node-0] 2025-03-22 22:47:37.133446 | orchestrator | changed: [testbed-node-2] 2025-03-22 22:47:37.133917 | orchestrator | changed: [testbed-node-3] 2025-03-22 22:47:37.134998 | orchestrator | changed: [testbed-node-4] 2025-03-22 22:47:37.135268 | orchestrator | changed: [testbed-node-5] 2025-03-22 22:47:37.136024 | orchestrator | 2025-03-22 22:47:37.136747 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-03-22 22:47:37.137065 | orchestrator | Saturday 22 March 2025 22:47:37 +0000 (0:00:01.380) 0:06:08.869 ******** 2025-03-22 22:47:37.357811 | orchestrator | skipping: [testbed-node-0] 2025-03-22 22:47:37.422864 | orchestrator | skipping: [testbed-node-1] 2025-03-22 22:47:37.499483 | orchestrator | skipping: [testbed-node-2] 2025-03-22 22:47:37.571690 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:47:37.810013 | orchestrator | skipping: [testbed-node-4] 2025-03-22 22:47:37.811020 | orchestrator | skipping: [testbed-node-5] 2025-03-22 22:47:37.812123 | orchestrator | changed: [testbed-manager] 2025-03-22 22:47:37.812990 | orchestrator | 2025-03-22 22:47:37.814281 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-03-22 22:47:37.815938 | orchestrator | Saturday 22 March 2025 22:47:37 +0000 (0:00:00.681) 0:06:09.550 ******** 2025-03-22 22:47:48.740808 | orchestrator | ok: [testbed-manager] 
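The "Pin docker package version" / "Pin docker-cli package version" tasks above hold the Docker packages at the version the role printed earlier (5:27.5.1). As a rough sketch of what such a pin looks like on a Debian-family host: the file name, pin pattern, and priority below are illustrative assumptions, not taken from the osism.services.docker role.

```shell
# Sketch only: write an apt preferences ("pin") entry for docker-ce at the
# version reported in the log. A priority above 1000 also allows downgrades
# to the pinned version. On a real host this file would live under
# /etc/apt/preferences.d/; a relative path is used here for illustration.
cat > docker-ce.pref <<'EOF'
Package: docker-ce
Pin: version 5:27.5.1*
Pin-Priority: 1001
EOF
cat docker-ce.pref
```

The trailing `*` matches any Debian revision suffix of that upstream version; `apt-cache policy docker-ce` would then show the pin taking effect.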
2025-03-22 22:47:48.741260 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:47:48.741297 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:47:48.741312 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:47:48.741327 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:47:48.741341 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:47:48.741355 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:47:48.741370 | orchestrator |
2025-03-22 22:47:48.741393 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-03-22 22:47:48.741986 | orchestrator | Saturday 22 March 2025 22:47:48 +0000 (0:00:10.922) 0:06:20.473 ********
2025-03-22 22:47:50.044961 | orchestrator | changed: [testbed-manager]
2025-03-22 22:47:50.045786 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:47:50.046433 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:47:50.047929 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:47:50.048835 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:47:50.049586 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:47:50.050108 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:47:50.051035 | orchestrator |
2025-03-22 22:47:50.052114 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-03-22 22:47:50.052781 | orchestrator | Saturday 22 March 2025 22:47:50 +0000 (0:00:01.306) 0:06:21.779 ********
2025-03-22 22:48:00.052015 | orchestrator | ok: [testbed-manager]
2025-03-22 22:48:00.053132 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:48:00.053188 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:48:00.055278 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:48:00.056605 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:48:00.057110 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:48:00.057963 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:48:00.058904 | orchestrator |
2025-03-22 22:48:00.059602 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-03-22 22:48:00.060001 | orchestrator | Saturday 22 March 2025 22:48:00 +0000 (0:00:10.008) 0:06:31.788 ********
2025-03-22 22:48:11.966458 | orchestrator | ok: [testbed-manager]
2025-03-22 22:48:11.967146 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:48:11.967185 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:48:11.967227 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:48:11.967251 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:48:11.967660 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:48:11.968172 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:48:11.968905 | orchestrator |
2025-03-22 22:48:11.968996 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-03-22 22:48:11.969492 | orchestrator | Saturday 22 March 2025 22:48:11 +0000 (0:00:11.913) 0:06:43.701 ********
2025-03-22 22:48:12.417756 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-03-22 22:48:13.242308 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-03-22 22:48:13.243467 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-03-22 22:48:13.248025 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-03-22 22:48:13.248275 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-03-22 22:48:13.249413 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-03-22 22:48:13.252122 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-03-22 22:48:13.253128 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-03-22 22:48:13.253301 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-03-22 22:48:13.256679 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-03-22 22:48:13.257680 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-03-22 22:48:13.258648 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-03-22 22:48:13.259865 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-03-22 22:48:13.260439 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-03-22 22:48:13.261456 | orchestrator |
2025-03-22 22:48:13.262394 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-03-22 22:48:13.263678 | orchestrator | Saturday 22 March 2025 22:48:13 +0000 (0:00:01.280) 0:06:44.982 ********
2025-03-22 22:48:13.383778 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:48:13.455875 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:48:13.523609 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:48:13.594148 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:48:13.685997 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:48:13.808005 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:48:13.808316 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:48:13.808529 | orchestrator |
2025-03-22 22:48:13.810134 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-03-22 22:48:13.811484 | orchestrator | Saturday 22 March 2025 22:48:13 +0000 (0:00:00.566) 0:06:45.548 ********
2025-03-22 22:48:18.476826 | orchestrator | ok: [testbed-manager]
2025-03-22 22:48:18.477712 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:48:18.478408 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:48:18.480104 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:48:18.481374 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:48:18.482496 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:48:18.482992 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:48:18.485650 | orchestrator |
2025-03-22 22:48:18.489046 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-03-22 22:48:18.644586 | orchestrator | Saturday 22 March 2025 22:48:18 +0000 (0:00:04.667) 0:06:50.216 ********
2025-03-22 22:48:18.644702 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:48:18.712280 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:48:18.792833 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:48:18.864332 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:48:18.932173 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:48:19.048441 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:48:19.050174 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:48:19.050584 | orchestrator |
2025-03-22 22:48:19.051457 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-03-22 22:48:19.052352 | orchestrator | Saturday 22 March 2025 22:48:19 +0000 (0:00:00.575) 0:06:50.791 ********
2025-03-22 22:48:19.137183 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-03-22 22:48:19.222529 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-03-22 22:48:19.222585 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:48:19.223352 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-03-22 22:48:19.224431 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-03-22 22:48:19.307697 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:48:19.308429 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-03-22 22:48:19.308887 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-03-22 22:48:19.391899 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:48:19.392614 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-03-22 22:48:19.393386 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-03-22 22:48:19.479609 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:48:19.481283 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-03-22 22:48:19.482156 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-03-22 22:48:19.571278 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:48:19.571754 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-03-22 22:48:19.572795 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-03-22 22:48:19.695831 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:48:19.697110 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-03-22 22:48:19.697993 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-03-22 22:48:19.698749 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:48:19.699561 | orchestrator |
2025-03-22 22:48:19.700335 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-03-22 22:48:19.701428 | orchestrator | Saturday 22 March 2025 22:48:19 +0000 (0:00:00.644) 0:06:51.435 ********
2025-03-22 22:48:19.860586 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:48:19.946978 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:48:20.022254 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:48:20.100900 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:48:20.170698 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:48:20.282258 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:48:20.283068 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:48:20.284266 | orchestrator |
2025-03-22 22:48:20.285147 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-03-22 22:48:20.286574 | orchestrator | Saturday 22 March 2025 22:48:20 +0000 (0:00:00.585) 0:06:52.021 ********
2025-03-22 22:48:20.442671 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:48:20.531909 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:48:20.608437 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:48:20.692439 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:48:20.762198 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:48:20.891456 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:48:20.893855 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:48:20.897086 | orchestrator |
2025-03-22 22:48:20.897532 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-03-22 22:48:20.898467 | orchestrator | Saturday 22 March 2025 22:48:20 +0000 (0:00:00.611) 0:06:52.633 ********
2025-03-22 22:48:21.043455 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:48:21.343526 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:48:21.419802 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:48:21.504249 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:48:21.582246 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:48:21.715337 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:48:21.718273 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:48:21.719484 | orchestrator |
2025-03-22 22:48:21.721055 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-03-22 22:48:21.721100 | orchestrator | Saturday 22 March 2025 22:48:21 +0000 (0:00:00.821) 0:06:53.455 ********
2025-03-22 22:48:23.644056 | orchestrator | ok: [testbed-manager]
2025-03-22 22:48:23.644475 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:48:23.645505 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:48:23.646400 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:48:23.648415 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:48:23.648876 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:48:23.648919 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:48:23.649277 | orchestrator |
2025-03-22 22:48:23.649574 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-03-22 22:48:23.649843 | orchestrator | Saturday 22 March 2025 22:48:23 +0000 (0:00:01.927) 0:06:55.382 ********
2025-03-22 22:48:24.719382 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-22 22:48:24.721247 | orchestrator |
2025-03-22 22:48:24.721608 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-03-22 22:48:24.724280 | orchestrator | Saturday 22 March 2025 22:48:24 +0000 (0:00:01.076) 0:06:56.459 ********
2025-03-22 22:48:25.227314 | orchestrator | ok: [testbed-manager]
2025-03-22 22:48:25.729681 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:48:25.729861 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:48:25.731017 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:48:25.731548 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:48:25.732183 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:48:25.733170 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:48:25.734538 | orchestrator |
2025-03-22 22:48:25.735601 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-03-22 22:48:25.736594 | orchestrator | Saturday 22 March 2025 22:48:25 +0000 (0:00:01.010) 0:06:57.470 ********
2025-03-22 22:48:26.231908 | orchestrator | ok: [testbed-manager]
2025-03-22 22:48:26.934548 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:48:26.934802 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:48:26.934913 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:48:26.935643 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:48:26.937472 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:48:26.938090 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:48:26.938415 | orchestrator |
2025-03-22 22:48:26.939053 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-03-22 22:48:26.940028 | orchestrator | Saturday 22 March 2025 22:48:26 +0000 (0:00:01.205) 0:06:58.676 ********
2025-03-22 22:48:28.374436 | orchestrator | ok: [testbed-manager]
2025-03-22 22:48:28.376414 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:48:28.377336 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:48:28.378128 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:48:28.379071 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:48:28.379327 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:48:28.379924 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:48:28.380561 | orchestrator |
2025-03-22 22:48:28.381246 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-03-22 22:48:28.381570 | orchestrator | Saturday 22 March 2025 22:48:28 +0000 (0:00:01.435) 0:07:00.111 ********
2025-03-22 22:48:28.527279 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:48:29.865699 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:48:29.865880 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:48:29.866486 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:48:29.866518 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:48:29.866742 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:48:29.867414 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:48:29.868917 | orchestrator |
2025-03-22 22:48:29.869048 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-03-22 22:48:29.869075 | orchestrator | Saturday 22 March 2025 22:48:29 +0000 (0:00:01.493) 0:07:01.605 ********
2025-03-22 22:48:31.290729 | orchestrator | ok: [testbed-manager]
2025-03-22 22:48:31.290868 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:48:31.290890 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:48:31.291007 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:48:31.291385 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:48:31.291722 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:48:31.292303 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:48:31.292804 | orchestrator |
2025-03-22 22:48:31.293235 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-03-22 22:48:31.293550 | orchestrator | Saturday 22 March 2025 22:48:31 +0000 (0:00:01.426) 0:07:03.031 ********
2025-03-22 22:48:33.172119 | orchestrator | changed: [testbed-manager]
2025-03-22 22:48:33.172983 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:48:33.174445 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:48:33.175723 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:48:33.177400 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:48:33.178626 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:48:33.179839 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:48:33.181098 | orchestrator |
2025-03-22 22:48:33.182231 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-03-22 22:48:33.183331 | orchestrator | Saturday 22 March 2025 22:48:33 +0000 (0:00:01.878) 0:07:04.910 ********
2025-03-22 22:48:34.236368 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-22 22:48:34.237060 | orchestrator |
2025-03-22 22:48:34.237106 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-03-22 22:48:34.237794 | orchestrator | Saturday 22 March 2025 22:48:34 +0000 (0:00:01.066) 0:07:05.976 ********
2025-03-22 22:48:36.012737 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:48:36.012891 | orchestrator | ok: [testbed-manager]
2025-03-22 22:48:36.012917 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:48:36.013556 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:48:36.014862 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:48:36.015347 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:48:36.015587 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:48:36.016984 | orchestrator |
2025-03-22 22:48:36.018351 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-03-22 22:48:36.018908 | orchestrator | Saturday 22 March 2025 22:48:36 +0000 (0:00:01.773) 0:07:07.750 ********
2025-03-22 22:48:37.264190 | orchestrator | ok: [testbed-manager]
2025-03-22 22:48:37.264398 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:48:37.264419 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:48:37.264462 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:48:37.264836 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:48:37.265285 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:48:37.268173 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:48:39.126590 | orchestrator |
2025-03-22 22:48:39.126694 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-03-22 22:48:39.126712 | orchestrator | Saturday 22 March 2025 22:48:37 +0000 (0:00:01.254) 0:07:09.005 ********
2025-03-22 22:48:39.126741 | orchestrator | ok: [testbed-manager]
2025-03-22 22:48:39.127947 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:48:39.127980 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:48:39.128940 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:48:39.129332 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:48:39.129599 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:48:39.133798 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:48:39.134003 | orchestrator |
2025-03-22 22:48:39.134870 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-03-22 22:48:39.135107 | orchestrator | Saturday 22 March 2025 22:48:39 +0000 (0:00:01.859) 0:07:10.864 ********
2025-03-22 22:48:40.405493 | orchestrator | ok: [testbed-manager]
2025-03-22 22:48:40.405918 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:48:40.405940 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:48:40.406832 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:48:40.406952 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:48:40.407879 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:48:40.412658 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:48:40.413475 | orchestrator |
2025-03-22 22:48:40.413639 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-03-22 22:48:40.414415 | orchestrator | Saturday 22 March 2025 22:48:40 +0000 (0:00:01.280) 0:07:12.144 ********
2025-03-22 22:48:42.002619 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-22 22:48:42.003014 | orchestrator |
2025-03-22 22:48:42.003419 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-03-22 22:48:42.003451 | orchestrator | Saturday 22 March 2025 22:48:41 +0000 (0:00:01.060) 0:07:13.205 ********
2025-03-22 22:48:42.003810 | orchestrator |
2025-03-22 22:48:42.005172 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-03-22 22:48:42.005627 | orchestrator | Saturday 22 March 2025 22:48:41 +0000 (0:00:00.053) 0:07:13.258 ********
2025-03-22 22:48:42.007030 | orchestrator |
2025-03-22 22:48:42.007366 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-03-22 22:48:42.008187 | orchestrator | Saturday 22 March 2025 22:48:41 +0000 (0:00:00.049) 0:07:13.307 ********
2025-03-22 22:48:42.009911 | orchestrator |
2025-03-22 22:48:42.010842 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-03-22 22:48:42.010971 | orchestrator | Saturday 22 March 2025 22:48:41 +0000 (0:00:00.041) 0:07:13.348 ********
2025-03-22 22:48:42.011680 | orchestrator |
2025-03-22 22:48:42.012111 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-03-22 22:48:42.013162 | orchestrator | Saturday 22 March 2025 22:48:41 +0000 (0:00:00.048) 0:07:13.397 ********
2025-03-22 22:48:42.013337 | orchestrator |
2025-03-22 22:48:42.014176 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-03-22 22:48:42.015407 | orchestrator | Saturday 22 March 2025 22:48:41 +0000 (0:00:00.039) 0:07:13.437 ********
2025-03-22 22:48:42.015998 | orchestrator |
2025-03-22 22:48:42.016434 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-03-22 22:48:42.017544 | orchestrator | Saturday 22 March 2025 22:48:41 +0000 (0:00:00.041) 0:07:13.478 ********
2025-03-22 22:48:42.017897 | orchestrator |
2025-03-22 22:48:42.018454 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-03-22 22:48:42.019103 | orchestrator | Saturday 22 March 2025 22:48:41 +0000 (0:00:00.263) 0:07:13.742 ********
2025-03-22 22:48:43.259994 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:48:43.260788 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:48:43.261791 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:48:43.263449 | orchestrator |
2025-03-22 22:48:43.264141 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-03-22 22:48:43.265352 | orchestrator | Saturday 22 March 2025 22:48:43 +0000 (0:00:01.253) 0:07:14.996 ********
2025-03-22 22:48:44.694173 | orchestrator | changed: [testbed-manager]
2025-03-22 22:48:44.694359 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:48:44.694382 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:48:44.695078 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:48:44.696373 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:48:44.696740 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:48:44.697148 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:48:44.697564 | orchestrator |
2025-03-22 22:48:44.698521 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-03-22 22:48:45.894482 | orchestrator | Saturday 22 March 2025 22:48:44 +0000 (0:00:01.438) 0:07:16.434 ********
2025-03-22 22:48:45.894631 | orchestrator | changed: [testbed-manager]
2025-03-22 22:48:45.894698 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:48:45.895352 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:48:45.895516 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:48:45.896111 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:48:45.896360 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:48:45.896911 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:48:45.897892 | orchestrator |
2025-03-22 22:48:45.898456 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-03-22 22:48:45.898486 | orchestrator | Saturday 22 March 2025 22:48:45 +0000 (0:00:01.200) 0:07:17.635 ********
2025-03-22 22:48:46.035407 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:48:48.001034 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:48:48.001787 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:48:48.003617 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:48:48.004334 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:48:48.004985 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:48:48.005431 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:48:48.005787 | orchestrator |
2025-03-22 22:48:48.006427 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-03-22 22:48:48.006532 | orchestrator | Saturday 22 March 2025 22:48:47 +0000 (0:00:02.103) 0:07:19.738 ********
2025-03-22 22:48:48.117558 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:48:48.119660 | orchestrator |
2025-03-22 22:48:48.121059 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-03-22 22:48:48.121992 | orchestrator | Saturday 22 March 2025 22:48:48 +0000 (0:00:00.118) 0:07:19.856 ********
2025-03-22 22:48:49.502982 | orchestrator | ok: [testbed-manager]
2025-03-22 22:48:49.503494 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:48:49.503543 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:48:49.504608 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:48:49.505623 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:48:49.506878 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:48:49.507963 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:48:49.509336 | orchestrator |
2025-03-22 22:48:49.509566 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-03-22 22:48:49.510241 | orchestrator | Saturday 22 March 2025 22:48:49 +0000 (0:00:01.379) 0:07:21.236 ********
2025-03-22 22:48:49.643445 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:48:49.733298 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:48:49.804019 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:48:49.876409 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:48:49.957956 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:48:50.114132 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:48:50.114919 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:48:50.118751 | orchestrator |
2025-03-22 22:48:50.119423 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-03-22 22:48:50.119468 | orchestrator | Saturday 22 March 2025 22:48:50 +0000 (0:00:00.615) 0:07:21.852 ********
2025-03-22 22:48:51.117467 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-22 22:48:51.121046 | orchestrator |
2025-03-22 22:48:51.121098 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-03-22 22:48:51.704853 | orchestrator | Saturday 22 March 2025 22:48:51 +0000 (0:00:01.004) 0:07:22.856 ********
2025-03-22 22:48:51.704967 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:48:52.184354 | orchestrator | ok: [testbed-manager]
2025-03-22 22:48:52.185127 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:48:52.185166 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:48:52.185183 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:48:52.185199 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:48:52.185245 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:48:52.185266 | orchestrator |
2025-03-22 22:48:52.186238 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-03-22 22:48:52.187089 | orchestrator | Saturday 22 March 2025 22:48:52 +0000 (0:00:01.060) 0:07:23.917 ********
2025-03-22 22:48:55.160650 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-03-22 22:48:55.161363 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-03-22 22:48:55.162089 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-03-22 22:48:55.163770 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-03-22 22:48:55.164519 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-03-22 22:48:55.164960 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-03-22 22:48:55.165876 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-03-22 22:48:55.166455 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-03-22 22:48:55.167273 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-03-22 22:48:55.167660 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-03-22 22:48:55.168412 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-03-22 22:48:55.169191 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-03-22 22:48:55.170200 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-03-22 22:48:55.170789 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-03-22 22:48:55.171375 | orchestrator |
2025-03-22 22:48:55.171597 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-03-22 22:48:55.172463 | orchestrator | Saturday 22 March 2025 22:48:55 +0000 (0:00:02.982) 0:07:26.900 ********
2025-03-22 22:48:55.316079 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:48:55.388974 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:48:55.458988 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:48:55.568664 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:48:55.637545 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:48:55.749702 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:48:55.750796 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:48:55.750827 | orchestrator |
2025-03-22 22:48:55.750849 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-03-22 22:48:55.751266 | orchestrator | Saturday 22 March 2025 22:48:55 +0000 (0:00:00.586) 0:07:27.487 ********
2025-03-22 22:48:56.696676 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-22 22:48:56.697616 | orchestrator |
2025-03-22 22:48:56.698174 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-03-22 22:48:56.698851 | orchestrator | Saturday 22 March 2025 22:48:56 +0000 (0:00:00.950) 0:07:28.437 ********
2025-03-22 22:48:57.168988 | orchestrator | ok: [testbed-manager]
2025-03-22 22:48:57.243395 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:48:57.847782 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:48:57.848311 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:48:57.848559 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:48:57.850148 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:48:57.850359 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:48:57.850401 | orchestrator |
2025-03-22 22:48:57.850973 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-03-22 22:48:57.851361 | orchestrator | Saturday 22 March 2025 22:48:57 +0000 (0:00:01.150) 0:07:29.587 ********
2025-03-22 22:48:58.340713 | orchestrator | ok: [testbed-manager]
2025-03-22 22:48:58.806572 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:48:58.807060 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:48:58.807095 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:48:58.807848 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:48:58.808449 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:48:58.809408 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:48:58.809885 | orchestrator |
2025-03-22 22:48:58.809942 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-03-22 22:48:58.810065 | orchestrator | Saturday 22 March 2025 22:48:58 +0000 (0:00:00.961) 0:07:30.549 ********
2025-03-22 22:48:58.958691 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:48:59.054862 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:48:59.152261 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:48:59.239828 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:48:59.321672 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:48:59.426913 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:48:59.427450 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:48:59.427500 | orchestrator |
2025-03-22 22:48:59.428087 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-03-22 22:48:59.428327 | orchestrator | Saturday 22 March 2025 22:48:59 +0000 (0:00:00.617) 0:07:31.167 ********
2025-03-22 22:49:01.147756 | orchestrator | ok: [testbed-manager]
2025-03-22 22:49:01.148174 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:49:01.148436 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:49:01.149198 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:49:01.153280 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:49:01.321646 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:49:01.321749 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:49:01.321763 | orchestrator |
2025-03-22 22:49:01.321778 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-03-22 22:49:01.321792 | orchestrator | Saturday 22 March 2025 22:49:01 +0000 (0:00:01.720) 0:07:32.887 ********
2025-03-22 22:49:01.321819 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:49:01.408337 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:49:01.496179 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:49:01.566804 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:49:01.634285 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:49:01.749461 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:49:01.749591 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:49:01.750906 | orchestrator |
2025-03-22 22:49:01.750952 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-03-22 22:49:01.751602 | orchestrator | Saturday 22 March 2025 22:49:01 +0000 (0:00:00.604) 0:07:33.492 ********
2025-03-22 22:49:11.204651 | orchestrator | ok: [testbed-manager]
2025-03-22 22:49:11.204837 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:49:11.204866 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:49:11.207032 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:49:11.207523 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:49:11.207882 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:49:11.208076 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:49:11.208721 | orchestrator |
2025-03-22 22:49:11.209135 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-03-22 22:49:11.209553 | orchestrator | Saturday 22 March 2025 22:49:11 +0000 (0:00:09.450) 0:07:42.943 ********
2025-03-22 22:49:12.669469 | orchestrator | ok: [testbed-manager]
2025-03-22 22:49:12.672310 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:49:12.672389 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:49:12.676164 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:49:12.676399 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:49:12.676427 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:49:12.676442 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:49:12.676458 | orchestrator |
2025-03-22 22:49:12.676480 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-03-22 22:49:12.678853 | orchestrator | Saturday 22 March 2025 22:49:12 +0000 (0:00:01.466) 0:07:44.409 ********
2025-03-22 22:49:14.512779 | orchestrator | ok: [testbed-manager]
2025-03-22 22:49:14.514852 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:49:14.516042 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:49:14.517868 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:49:14.519043 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:49:14.519905 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:49:14.520678 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:49:14.521079 | orchestrator |
2025-03-22 22:49:14.521895 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-03-22 22:49:14.522602 | orchestrator | Saturday 22 March 2025 22:49:14 +0000 (0:00:01.841) 0:07:46.251 ********
2025-03-22 22:49:16.557689 | orchestrator | ok: [testbed-manager]
2025-03-22 22:49:16.557943 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:49:16.558078 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:49:16.558763 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:49:16.558916 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:49:16.559427 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:49:16.563607 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:49:16.563636 | orchestrator |
2025-03-22 22:49:16.564590 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-03-22 22:49:16.564683 | orchestrator | Saturday 22 March 2025 22:49:16 +0000 (0:00:02.045) 0:07:48.297 ********
2025-03-22 22:49:17.162307 | orchestrator | ok: [testbed-manager]
2025-03-22 22:49:17.663497 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:49:17.664287 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:49:17.664978 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:49:17.665008 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:49:17.668319 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:49:17.668351 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:49:17.806357 | orchestrator |
2025-03-22 22:49:17.806393 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-03-22 22:49:17.806410 | orchestrator | Saturday 22 March 2025 22:49:17 +0000 (0:00:01.105) 0:07:49.402 ********
2025-03-22 22:49:17.806462 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:49:17.892977 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:49:17.969440 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:49:18.044693 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:49:18.114975 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:49:18.592266 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:49:18.595745 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:49:18.597684 | orchestrator |
2025-03-22 22:49:18.598988 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-03-22 22:49:18.599021 | orchestrator | Saturday 22 March 2025 22:49:18 +0000 (0:00:00.932) 0:07:50.334 ********
2025-03-22 22:49:18.744517 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:49:18.816497 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:49:18.883555 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:49:18.971253 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:49:19.067980 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:49:19.213600 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:49:19.213736 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:49:19.215378 | orchestrator |
2025-03-22 22:49:19.216035 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-03-22 22:49:19.216512 | orchestrator | Saturday 22 March 2025 22:49:19 +0000 (0:00:00.618) 0:07:50.953 ********
2025-03-22 22:49:19.384786 | orchestrator | ok: [testbed-manager]
2025-03-22 22:49:19.454086 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:49:19.752173 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:49:19.828175 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:49:19.903467 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:49:20.022460 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:49:20.023444 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:49:20.027684 | orchestrator |
2025-03-22 22:49:20.166483 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-03-22 22:49:20.166569 | orchestrator | Saturday 22 March 2025 22:49:20 +0000 (0:00:00.809) 0:07:51.762 ********
2025-03-22 22:49:20.166596 | orchestrator | ok: [testbed-manager]
2025-03-22 22:49:20.248904 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:49:20.322124 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:49:20.407841 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:49:20.495516 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:49:20.620501 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:49:20.621426 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:49:20.622420 | orchestrator |
2025-03-22 22:49:20.623903 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-03-22 22:49:20.624156 | orchestrator | Saturday 22 March 2025 22:49:20 +0000 (0:00:00.598) 0:07:52.360 ********
2025-03-22 22:49:20.784629 | orchestrator | ok: [testbed-manager]
2025-03-22 22:49:20.855698 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:49:20.929087 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:49:21.011186 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:49:21.099801 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:49:21.233534 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:49:21.234965 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:49:21.235728 | orchestrator |
2025-03-22 22:49:21.237081 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-03-22 22:49:21.237870 | orchestrator | Saturday 22 March 2025 22:49:21 +0000 (0:00:00.612) 0:07:52.972 ********
2025-03-22 22:49:26.551678 | orchestrator | ok: [testbed-manager]
2025-03-22 22:49:26.551849 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:49:26.552384 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:49:26.553430 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:49:26.554158 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:49:26.555268 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:49:26.555726 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:49:26.559293 | orchestrator |
2025-03-22 22:49:26.712705 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-03-22 22:49:26.712812 | orchestrator | Saturday 22 March 2025 22:49:26 +0000 (0:00:05.320) 0:07:58.293 ********
2025-03-22 22:49:26.712839 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:49:26.790765 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:49:26.861251 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:49:26.937375 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:49:27.017894 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:49:27.629574 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:49:27.630011 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:49:27.630471 | orchestrator |
2025-03-22 22:49:27.634359 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-03-22 22:49:28.613782 | orchestrator | Saturday 22 March 2025 22:49:27 +0000 (0:00:01.072) 0:07:59.366 ********
2025-03-22 22:49:28.613902 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-22 22:49:28.614621 | orchestrator |
2025-03-22 22:49:28.615265 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-03-22 22:49:28.619688 | orchestrator | Saturday 22 March 2025 22:49:28 +0000 (0:00:00.985) 0:08:00.351 ******** 2025-03-22 22:49:30.557238 | orchestrator | ok: [testbed-manager] 2025-03-22 22:49:30.557400 | orchestrator | ok: [testbed-node-1] 2025-03-22 22:49:30.559590 | orchestrator | ok: [testbed-node-2] 2025-03-22 22:49:30.560072 | orchestrator | ok: [testbed-node-4] 2025-03-22 22:49:30.561125 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:49:30.562121 | orchestrator | ok: [testbed-node-0] 2025-03-22 22:49:30.562156 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:49:30.562250 | orchestrator | 2025-03-22 22:49:30.562907 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-03-22 22:49:30.564146 | orchestrator | Saturday 22 March 2025 22:49:30 +0000 (0:00:01.943) 0:08:02.295 ******** 2025-03-22 22:49:31.845105 | orchestrator | ok: [testbed-manager] 2025-03-22 22:49:31.846070 | orchestrator | ok: [testbed-node-0] 2025-03-22 22:49:31.847115 | orchestrator | ok: [testbed-node-1] 2025-03-22 22:49:31.848027 | orchestrator | ok: [testbed-node-2] 2025-03-22 22:49:31.848653 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:49:31.849272 | orchestrator | ok: [testbed-node-4] 2025-03-22 22:49:31.849831 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:49:31.850742 | orchestrator | 2025-03-22 22:49:31.851288 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-03-22 22:49:31.851551 | orchestrator | Saturday 22 March 2025 22:49:31 +0000 (0:00:01.289) 0:08:03.585 ******** 2025-03-22 22:49:32.468517 | orchestrator | ok: [testbed-manager] 2025-03-22 22:49:32.540925 | orchestrator | ok: [testbed-node-0] 2025-03-22 22:49:32.985798 | orchestrator | ok: [testbed-node-1] 2025-03-22 22:49:32.985929 | orchestrator | ok: [testbed-node-2] 2025-03-22 22:49:32.985948 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:49:32.985963 | orchestrator | ok: [testbed-node-4] 
2025-03-22 22:49:32.985981 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:49:32.986248 | orchestrator | 2025-03-22 22:49:32.986678 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-03-22 22:49:32.988572 | orchestrator | Saturday 22 March 2025 22:49:32 +0000 (0:00:01.139) 0:08:04.724 ******** 2025-03-22 22:49:34.956002 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-22 22:49:34.956317 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-22 22:49:34.956361 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-22 22:49:34.957061 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-22 22:49:34.957565 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-22 22:49:34.957801 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-22 22:49:34.958383 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-22 22:49:34.962853 | orchestrator | 2025-03-22 22:49:34.962958 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-03-22 22:49:34.963358 | orchestrator | Saturday 22 March 2025 22:49:34 +0000 (0:00:01.971) 0:08:06.696 ******** 2025-03-22 22:49:35.965185 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-22 22:49:35.965627 | orchestrator | 2025-03-22 22:49:35.966585 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-03-22 22:49:35.967613 | orchestrator | Saturday 22 March 2025 22:49:35 +0000 (0:00:01.011) 0:08:07.707 ******** 2025-03-22 22:49:46.066286 | orchestrator | changed: [testbed-node-2] 2025-03-22 22:49:46.066514 | orchestrator | changed: [testbed-node-5] 2025-03-22 22:49:46.066558 | orchestrator | changed: [testbed-node-4] 2025-03-22 22:49:46.066574 | orchestrator | changed: [testbed-node-1] 2025-03-22 22:49:46.066588 | orchestrator | changed: [testbed-manager] 2025-03-22 22:49:46.066602 | orchestrator | changed: [testbed-node-3] 2025-03-22 22:49:46.066623 | orchestrator | changed: [testbed-node-0] 2025-03-22 22:49:46.067814 | orchestrator | 2025-03-22 22:49:46.067852 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-03-22 22:49:46.068068 | orchestrator | Saturday 22 March 2025 22:49:46 +0000 (0:00:10.095) 0:08:17.803 ******** 2025-03-22 22:49:47.990381 | orchestrator | ok: [testbed-manager] 2025-03-22 22:49:47.991433 | orchestrator | ok: [testbed-node-0] 2025-03-22 22:49:47.992673 | orchestrator | ok: [testbed-node-1] 2025-03-22 22:49:47.993312 | orchestrator | ok: [testbed-node-2] 2025-03-22 22:49:47.993342 | orchestrator | ok: [testbed-node-4] 2025-03-22 22:49:47.993817 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:49:47.994748 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:49:47.995332 | orchestrator | 2025-03-22 22:49:47.996477 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-03-22 22:49:47.997088 | orchestrator | Saturday 22 March 2025 22:49:47 +0000 
(0:00:01.927) 0:08:19.730 ******** 2025-03-22 22:49:49.431608 | orchestrator | ok: [testbed-node-0] 2025-03-22 22:49:49.432145 | orchestrator | ok: [testbed-node-1] 2025-03-22 22:49:49.432183 | orchestrator | ok: [testbed-node-2] 2025-03-22 22:49:49.432773 | orchestrator | ok: [testbed-node-4] 2025-03-22 22:49:49.432805 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:49:49.433170 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:49:49.433194 | orchestrator | 2025-03-22 22:49:49.433240 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-03-22 22:49:49.434474 | orchestrator | Saturday 22 March 2025 22:49:49 +0000 (0:00:01.439) 0:08:21.170 ******** 2025-03-22 22:49:50.998836 | orchestrator | changed: [testbed-manager] 2025-03-22 22:49:50.998988 | orchestrator | changed: [testbed-node-0] 2025-03-22 22:49:50.999713 | orchestrator | changed: [testbed-node-1] 2025-03-22 22:49:51.000629 | orchestrator | changed: [testbed-node-2] 2025-03-22 22:49:51.001670 | orchestrator | changed: [testbed-node-3] 2025-03-22 22:49:51.005664 | orchestrator | changed: [testbed-node-4] 2025-03-22 22:49:51.141672 | orchestrator | changed: [testbed-node-5] 2025-03-22 22:49:51.142308 | orchestrator | 2025-03-22 22:49:51.142348 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-03-22 22:49:51.142367 | orchestrator | 2025-03-22 22:49:51.142385 | orchestrator | TASK [Include hardening role] ************************************************** 2025-03-22 22:49:51.142429 | orchestrator | Saturday 22 March 2025 22:49:50 +0000 (0:00:01.570) 0:08:22.740 ******** 2025-03-22 22:49:51.142465 | orchestrator | skipping: [testbed-manager] 2025-03-22 22:49:51.216903 | orchestrator | skipping: [testbed-node-0] 2025-03-22 22:49:51.280178 | orchestrator | skipping: [testbed-node-1] 2025-03-22 22:49:51.360007 | orchestrator | skipping: [testbed-node-2] 2025-03-22 22:49:51.432337 | orchestrator | skipping: 
[testbed-node-3] 2025-03-22 22:49:51.570278 | orchestrator | skipping: [testbed-node-4] 2025-03-22 22:49:51.570867 | orchestrator | skipping: [testbed-node-5] 2025-03-22 22:49:51.570902 | orchestrator | 2025-03-22 22:49:51.571024 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-03-22 22:49:51.571391 | orchestrator | 2025-03-22 22:49:51.571720 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-03-22 22:49:51.571889 | orchestrator | Saturday 22 March 2025 22:49:51 +0000 (0:00:00.569) 0:08:23.310 ******** 2025-03-22 22:49:53.025496 | orchestrator | changed: [testbed-manager] 2025-03-22 22:49:53.027631 | orchestrator | changed: [testbed-node-0] 2025-03-22 22:49:53.027677 | orchestrator | changed: [testbed-node-1] 2025-03-22 22:49:53.027704 | orchestrator | changed: [testbed-node-2] 2025-03-22 22:49:53.027777 | orchestrator | changed: [testbed-node-4] 2025-03-22 22:49:53.028350 | orchestrator | changed: [testbed-node-3] 2025-03-22 22:49:53.028612 | orchestrator | changed: [testbed-node-5] 2025-03-22 22:49:53.029170 | orchestrator | 2025-03-22 22:49:53.029329 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-03-22 22:49:53.029742 | orchestrator | Saturday 22 March 2025 22:49:53 +0000 (0:00:01.452) 0:08:24.762 ******** 2025-03-22 22:49:54.836330 | orchestrator | ok: [testbed-manager] 2025-03-22 22:49:54.836488 | orchestrator | ok: [testbed-node-0] 2025-03-22 22:49:54.839556 | orchestrator | ok: [testbed-node-1] 2025-03-22 22:49:54.841576 | orchestrator | ok: [testbed-node-2] 2025-03-22 22:49:54.841600 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:49:54.841613 | orchestrator | ok: [testbed-node-4] 2025-03-22 22:49:54.841625 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:49:54.841638 | orchestrator | 2025-03-22 22:49:54.841657 | orchestrator | TASK [Include auditd role] 
***************************************************** 2025-03-22 22:49:54.844915 | orchestrator | Saturday 22 March 2025 22:49:54 +0000 (0:00:01.812) 0:08:26.574 ******** 2025-03-22 22:49:54.986483 | orchestrator | skipping: [testbed-manager] 2025-03-22 22:49:55.062466 | orchestrator | skipping: [testbed-node-0] 2025-03-22 22:49:55.166507 | orchestrator | skipping: [testbed-node-1] 2025-03-22 22:49:55.238757 | orchestrator | skipping: [testbed-node-2] 2025-03-22 22:49:55.314534 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:49:55.801575 | orchestrator | skipping: [testbed-node-4] 2025-03-22 22:49:55.802568 | orchestrator | skipping: [testbed-node-5] 2025-03-22 22:49:55.803935 | orchestrator | 2025-03-22 22:49:55.804772 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-03-22 22:49:55.805124 | orchestrator | Saturday 22 March 2025 22:49:55 +0000 (0:00:00.969) 0:08:27.543 ******** 2025-03-22 22:49:57.224143 | orchestrator | changed: [testbed-manager] 2025-03-22 22:49:57.224344 | orchestrator | changed: [testbed-node-0] 2025-03-22 22:49:57.224811 | orchestrator | changed: [testbed-node-2] 2025-03-22 22:49:57.226987 | orchestrator | changed: [testbed-node-1] 2025-03-22 22:49:57.227339 | orchestrator | changed: [testbed-node-3] 2025-03-22 22:49:57.228773 | orchestrator | changed: [testbed-node-4] 2025-03-22 22:49:57.229182 | orchestrator | changed: [testbed-node-5] 2025-03-22 22:49:57.229244 | orchestrator | 2025-03-22 22:49:57.229759 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-03-22 22:49:57.230497 | orchestrator | 2025-03-22 22:49:57.231240 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-03-22 22:49:57.231864 | orchestrator | Saturday 22 March 2025 22:49:57 +0000 (0:00:01.421) 0:08:28.965 ******** 2025-03-22 22:49:58.395619 | orchestrator | included: osism.commons.state for testbed-manager, 
testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-22 22:49:58.395930 | orchestrator | 2025-03-22 22:49:58.397867 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-03-22 22:49:58.398707 | orchestrator | Saturday 22 March 2025 22:49:58 +0000 (0:00:01.168) 0:08:30.133 ******** 2025-03-22 22:49:58.865128 | orchestrator | ok: [testbed-manager] 2025-03-22 22:49:59.382172 | orchestrator | ok: [testbed-node-0] 2025-03-22 22:49:59.382741 | orchestrator | ok: [testbed-node-1] 2025-03-22 22:49:59.383319 | orchestrator | ok: [testbed-node-2] 2025-03-22 22:49:59.383351 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:49:59.383669 | orchestrator | ok: [testbed-node-4] 2025-03-22 22:49:59.384519 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:49:59.384711 | orchestrator | 2025-03-22 22:49:59.385163 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-03-22 22:49:59.386098 | orchestrator | Saturday 22 March 2025 22:49:59 +0000 (0:00:00.987) 0:08:31.121 ******** 2025-03-22 22:50:00.898560 | orchestrator | changed: [testbed-node-1] 2025-03-22 22:50:00.898751 | orchestrator | changed: [testbed-node-0] 2025-03-22 22:50:00.899020 | orchestrator | changed: [testbed-manager] 2025-03-22 22:50:00.899785 | orchestrator | changed: [testbed-node-2] 2025-03-22 22:50:00.900514 | orchestrator | changed: [testbed-node-3] 2025-03-22 22:50:00.900783 | orchestrator | changed: [testbed-node-4] 2025-03-22 22:50:00.901378 | orchestrator | changed: [testbed-node-5] 2025-03-22 22:50:00.901656 | orchestrator | 2025-03-22 22:50:00.902752 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-03-22 22:50:00.903009 | orchestrator | Saturday 22 March 2025 22:50:00 +0000 (0:00:01.518) 0:08:32.640 ******** 2025-03-22 22:50:02.077563 | orchestrator | included: osism.commons.state for testbed-manager, 
testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-22 22:50:02.077817 | orchestrator | 2025-03-22 22:50:02.077853 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-03-22 22:50:02.078303 | orchestrator | Saturday 22 March 2025 22:50:02 +0000 (0:00:01.171) 0:08:33.811 ******** 2025-03-22 22:50:02.979038 | orchestrator | ok: [testbed-manager] 2025-03-22 22:50:02.979266 | orchestrator | ok: [testbed-node-0] 2025-03-22 22:50:02.979297 | orchestrator | ok: [testbed-node-1] 2025-03-22 22:50:02.980927 | orchestrator | ok: [testbed-node-2] 2025-03-22 22:50:02.981442 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:50:02.982155 | orchestrator | ok: [testbed-node-4] 2025-03-22 22:50:02.982555 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:50:02.984432 | orchestrator | 2025-03-22 22:50:02.985529 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-03-22 22:50:02.986353 | orchestrator | Saturday 22 March 2025 22:50:02 +0000 (0:00:00.901) 0:08:34.712 ******** 2025-03-22 22:50:03.490935 | orchestrator | changed: [testbed-manager] 2025-03-22 22:50:04.330875 | orchestrator | changed: [testbed-node-0] 2025-03-22 22:50:04.331037 | orchestrator | changed: [testbed-node-1] 2025-03-22 22:50:04.331066 | orchestrator | changed: [testbed-node-2] 2025-03-22 22:50:04.331156 | orchestrator | changed: [testbed-node-3] 2025-03-22 22:50:04.331633 | orchestrator | changed: [testbed-node-4] 2025-03-22 22:50:04.331858 | orchestrator | changed: [testbed-node-5] 2025-03-22 22:50:04.332364 | orchestrator | 2025-03-22 22:50:04.332547 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-22 22:50:04.332911 | orchestrator | 2025-03-22 22:50:04 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-03-22 22:50:04.333200 | orchestrator | 2025-03-22 22:50:04 | INFO  | Please wait and do not abort execution. 2025-03-22 22:50:04.333993 | orchestrator | testbed-manager : ok=163  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-03-22 22:50:04.334435 | orchestrator | testbed-node-0 : ok=171  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-03-22 22:50:04.334763 | orchestrator | testbed-node-1 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-03-22 22:50:04.334967 | orchestrator | testbed-node-2 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-03-22 22:50:04.335352 | orchestrator | testbed-node-3 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-03-22 22:50:04.335856 | orchestrator | testbed-node-4 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-03-22 22:50:04.336063 | orchestrator | testbed-node-5 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-03-22 22:50:04.336394 | orchestrator | 2025-03-22 22:50:04.336599 | orchestrator | 2025-03-22 22:50:04.337291 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-22 22:50:04.338119 | orchestrator | Saturday 22 March 2025 22:50:04 +0000 (0:00:01.359) 0:08:36.072 ******** 2025-03-22 22:50:04.338442 | orchestrator | =============================================================================== 2025-03-22 22:50:04.338973 | orchestrator | osism.commons.packages : Install required packages --------------------- 74.79s 2025-03-22 22:50:04.339379 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.08s 2025-03-22 22:50:04.339789 | orchestrator | osism.commons.packages : Download required packages -------------------- 32.56s 2025-03-22 22:50:04.340132 | orchestrator | osism.commons.repository : Update package cache 
------------------------ 14.32s 2025-03-22 22:50:04.340449 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.05s 2025-03-22 22:50:04.340555 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.99s 2025-03-22 22:50:04.341274 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.91s 2025-03-22 22:50:04.341379 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.92s 2025-03-22 22:50:04.342085 | orchestrator | osism.services.lldpd : Install lldpd package --------------------------- 10.10s 2025-03-22 22:50:04.342238 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 10.01s 2025-03-22 22:50:04.342722 | orchestrator | osism.commons.packages : Upgrade packages ------------------------------- 9.81s 2025-03-22 22:50:04.342950 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 9.45s 2025-03-22 22:50:04.343180 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.24s 2025-03-22 22:50:04.343423 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.95s 2025-03-22 22:50:04.343742 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.80s 2025-03-22 22:50:04.344019 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.75s 2025-03-22 22:50:04.344288 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.27s 2025-03-22 22:50:04.344466 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.87s 2025-03-22 22:50:04.344912 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 6.96s 2025-03-22 22:50:04.345210 | orchestrator | osism.commons.cleanup : Remove dependencies that are no 
longer required --- 6.79s 2025-03-22 22:50:05.243411 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-03-22 22:50:07.504051 | orchestrator | + osism apply network 2025-03-22 22:50:07.504185 | orchestrator | 2025-03-22 22:50:07 | INFO  | Task 9508f9de-de02-4b61-8f32-d08ef0f4863a (network) was prepared for execution. 2025-03-22 22:50:11.629844 | orchestrator | 2025-03-22 22:50:07 | INFO  | It takes a moment until task 9508f9de-de02-4b61-8f32-d08ef0f4863a (network) has been started and output is visible here. 2025-03-22 22:50:11.629977 | orchestrator | 2025-03-22 22:50:11.631386 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-03-22 22:50:11.632508 | orchestrator | 2025-03-22 22:50:11.632537 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-03-22 22:50:11.633250 | orchestrator | Saturday 22 March 2025 22:50:11 +0000 (0:00:00.249) 0:00:00.249 ******** 2025-03-22 22:50:11.808737 | orchestrator | ok: [testbed-manager] 2025-03-22 22:50:11.894731 | orchestrator | ok: [testbed-node-0] 2025-03-22 22:50:11.995532 | orchestrator | ok: [testbed-node-1] 2025-03-22 22:50:12.078752 | orchestrator | ok: [testbed-node-2] 2025-03-22 22:50:12.168763 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:50:12.435559 | orchestrator | ok: [testbed-node-4] 2025-03-22 22:50:12.435751 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:50:12.436257 | orchestrator | 2025-03-22 22:50:12.436885 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-03-22 22:50:12.437343 | orchestrator | Saturday 22 March 2025 22:50:12 +0000 (0:00:00.806) 0:00:01.055 ******** 2025-03-22 22:50:13.794088 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 
2025-03-22 22:50:13.794557 | orchestrator | 2025-03-22 22:50:13.796639 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-03-22 22:50:13.797414 | orchestrator | Saturday 22 March 2025 22:50:13 +0000 (0:00:01.358) 0:00:02.413 ******** 2025-03-22 22:50:16.272393 | orchestrator | ok: [testbed-node-1] 2025-03-22 22:50:16.273460 | orchestrator | ok: [testbed-node-2] 2025-03-22 22:50:16.273498 | orchestrator | ok: [testbed-manager] 2025-03-22 22:50:16.273895 | orchestrator | ok: [testbed-node-0] 2025-03-22 22:50:16.274599 | orchestrator | ok: [testbed-node-4] 2025-03-22 22:50:16.275298 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:50:16.277412 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:50:16.278145 | orchestrator | 2025-03-22 22:50:16.279177 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-03-22 22:50:16.279659 | orchestrator | Saturday 22 March 2025 22:50:16 +0000 (0:00:02.479) 0:00:04.892 ******** 2025-03-22 22:50:18.162085 | orchestrator | ok: [testbed-manager] 2025-03-22 22:50:18.162880 | orchestrator | ok: [testbed-node-0] 2025-03-22 22:50:18.164547 | orchestrator | ok: [testbed-node-1] 2025-03-22 22:50:18.165169 | orchestrator | ok: [testbed-node-2] 2025-03-22 22:50:18.165415 | orchestrator | ok: [testbed-node-4] 2025-03-22 22:50:18.165855 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:50:18.166072 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:50:18.166495 | orchestrator | 2025-03-22 22:50:18.166919 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-03-22 22:50:18.167357 | orchestrator | Saturday 22 March 2025 22:50:18 +0000 (0:00:01.887) 0:00:06.780 ******** 2025-03-22 22:50:18.716594 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-03-22 22:50:19.264853 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-03-22 22:50:19.265670 | orchestrator | ok: 
[testbed-node-1] => (item=/etc/netplan) 2025-03-22 22:50:19.267256 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-03-22 22:50:19.267436 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-03-22 22:50:19.267763 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-03-22 22:50:19.268083 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-03-22 22:50:19.269059 | orchestrator | 2025-03-22 22:50:19.269214 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-03-22 22:50:19.269747 | orchestrator | Saturday 22 March 2025 22:50:19 +0000 (0:00:01.107) 0:00:07.888 ******** 2025-03-22 22:50:21.374607 | orchestrator | ok: [testbed-manager -> localhost] 2025-03-22 22:50:21.376382 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-03-22 22:50:21.377132 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-03-22 22:50:21.379933 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-03-22 22:50:21.381532 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-03-22 22:50:21.382500 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-22 22:50:21.383731 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-03-22 22:50:21.388319 | orchestrator | 2025-03-22 22:50:21.388832 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-03-22 22:50:21.389312 | orchestrator | Saturday 22 March 2025 22:50:21 +0000 (0:00:02.103) 0:00:09.991 ******** 2025-03-22 22:50:23.142803 | orchestrator | changed: [testbed-manager] 2025-03-22 22:50:23.143551 | orchestrator | changed: [testbed-node-0] 2025-03-22 22:50:23.143632 | orchestrator | changed: [testbed-node-1] 2025-03-22 22:50:23.144395 | orchestrator | changed: [testbed-node-2] 2025-03-22 22:50:23.145712 | orchestrator | changed: [testbed-node-3] 2025-03-22 22:50:23.146586 | orchestrator | changed: [testbed-node-4] 2025-03-22 22:50:23.146977 | orchestrator | changed: 
[testbed-node-5] 2025-03-22 22:50:23.147510 | orchestrator | 2025-03-22 22:50:23.147865 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-03-22 22:50:23.148322 | orchestrator | Saturday 22 March 2025 22:50:23 +0000 (0:00:01.768) 0:00:11.760 ******** 2025-03-22 22:50:23.760657 | orchestrator | ok: [testbed-manager -> localhost] 2025-03-22 22:50:23.906134 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-22 22:50:24.418596 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-03-22 22:50:24.419642 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-03-22 22:50:24.420683 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-03-22 22:50:24.422158 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-03-22 22:50:24.424674 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-03-22 22:50:24.427146 | orchestrator | 2025-03-22 22:50:24.427273 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-03-22 22:50:24.428085 | orchestrator | Saturday 22 March 2025 22:50:24 +0000 (0:00:01.280) 0:00:13.041 ******** 2025-03-22 22:50:24.902166 | orchestrator | ok: [testbed-manager] 2025-03-22 22:50:25.751048 | orchestrator | ok: [testbed-node-0] 2025-03-22 22:50:25.754582 | orchestrator | ok: [testbed-node-1] 2025-03-22 22:50:25.755598 | orchestrator | ok: [testbed-node-2] 2025-03-22 22:50:25.755628 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:50:25.755648 | orchestrator | ok: [testbed-node-4] 2025-03-22 22:50:25.756783 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:50:25.757670 | orchestrator | 2025-03-22 22:50:25.758377 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-03-22 22:50:25.759065 | orchestrator | Saturday 22 March 2025 22:50:25 +0000 (0:00:01.327) 0:00:14.368 ******** 2025-03-22 22:50:25.942905 | orchestrator | skipping: [testbed-manager] 2025-03-22 22:50:26.033031 | orchestrator | 
skipping: [testbed-node-0] 2025-03-22 22:50:26.119091 | orchestrator | skipping: [testbed-node-1] 2025-03-22 22:50:26.217463 | orchestrator | skipping: [testbed-node-2] 2025-03-22 22:50:26.316282 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:50:26.472114 | orchestrator | skipping: [testbed-node-4] 2025-03-22 22:50:26.472616 | orchestrator | skipping: [testbed-node-5] 2025-03-22 22:50:26.472948 | orchestrator | 2025-03-22 22:50:26.473325 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-03-22 22:50:26.473845 | orchestrator | Saturday 22 March 2025 22:50:26 +0000 (0:00:00.726) 0:00:15.095 ******** 2025-03-22 22:50:28.851003 | orchestrator | ok: [testbed-manager] 2025-03-22 22:50:28.854077 | orchestrator | ok: [testbed-node-1] 2025-03-22 22:50:28.854115 | orchestrator | ok: [testbed-node-2] 2025-03-22 22:50:28.854130 | orchestrator | ok: [testbed-node-0] 2025-03-22 22:50:28.854152 | orchestrator | ok: [testbed-node-4] 2025-03-22 22:50:28.854430 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:50:28.856290 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:50:28.856983 | orchestrator | 2025-03-22 22:50:28.858005 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-03-22 22:50:28.858722 | orchestrator | Saturday 22 March 2025 22:50:28 +0000 (0:00:02.368) 0:00:17.463 ******** 2025-03-22 22:50:29.146900 | orchestrator | skipping: [testbed-node-0] 2025-03-22 22:50:29.260902 | orchestrator | skipping: [testbed-node-1] 2025-03-22 22:50:29.347459 | orchestrator | skipping: [testbed-node-2] 2025-03-22 22:50:29.464636 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:50:29.843480 | orchestrator | skipping: [testbed-node-4] 2025-03-22 22:50:29.844117 | orchestrator | skipping: [testbed-node-5] 2025-03-22 22:50:29.845465 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': 
'/opt/configuration/network/iptables.sh'})
2025-03-22 22:50:29.846521 | orchestrator |
2025-03-22 22:50:29.848047 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-03-22 22:50:29.849691 | orchestrator | Saturday 22 March 2025 22:50:29 +0000 (0:00:01.001) 0:00:18.465 ********
2025-03-22 22:50:31.658311 | orchestrator | ok: [testbed-manager]
2025-03-22 22:50:31.659050 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:50:31.660062 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:50:31.661219 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:50:31.661923 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:50:31.662943 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:50:31.663756 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:50:31.664178 | orchestrator |
2025-03-22 22:50:31.666092 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-03-22 22:50:31.666698 | orchestrator | Saturday 22 March 2025 22:50:31 +0000 (0:00:01.811) 0:00:20.276 ********
2025-03-22 22:50:33.109651 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-22 22:50:33.109960 | orchestrator |
2025-03-22 22:50:33.111216 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-03-22 22:50:33.111572 | orchestrator | Saturday 22 March 2025 22:50:33 +0000 (0:00:01.451) 0:00:21.727 ********
2025-03-22 22:50:33.940192 | orchestrator | ok: [testbed-manager]
2025-03-22 22:50:34.902412 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:50:34.903215 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:50:34.906568 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:50:34.909334 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:50:34.910827 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:50:34.912117 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:50:34.913672 | orchestrator |
2025-03-22 22:50:34.915616 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-03-22 22:50:34.915914 | orchestrator | Saturday 22 March 2025 22:50:34 +0000 (0:00:01.790) 0:00:23.518 ********
2025-03-22 22:50:35.088148 | orchestrator | ok: [testbed-manager]
2025-03-22 22:50:35.201150 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:50:35.315414 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:50:35.426782 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:50:35.547471 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:50:35.699219 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:50:35.700760 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:50:35.703492 | orchestrator |
2025-03-22 22:50:36.457815 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-03-22 22:50:36.457923 | orchestrator | Saturday 22 March 2025 22:50:35 +0000 (0:00:00.802) 0:00:24.321 ********
2025-03-22 22:50:36.457956 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-03-22 22:50:36.458338 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-03-22 22:50:36.459197 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2025-03-22 22:50:36.459628 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2025-03-22 22:50:36.460012 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2025-03-22 22:50:36.463161 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2025-03-22 22:50:36.576069 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2025-03-22 22:50:36.576130 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2025-03-22 22:50:36.576157 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2025-03-22 22:50:37.075161 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2025-03-22 22:50:37.075314 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2025-03-22 22:50:37.075395 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2025-03-22 22:50:37.075543 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2025-03-22 22:50:37.075569 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2025-03-22 22:50:37.076267 | orchestrator |
2025-03-22 22:50:37.077214 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2025-03-22 22:50:37.077418 | orchestrator | Saturday 22 March 2025 22:50:37 +0000 (0:00:01.374) 0:00:25.695 ********
2025-03-22 22:50:37.278495 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:50:37.375673 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:50:37.467440 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:50:37.575642 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:50:37.684436 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:50:37.832113 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:50:37.832820 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:50:37.833519 | orchestrator |
2025-03-22 22:50:37.834353 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2025-03-22 22:50:37.836026 | orchestrator | Saturday 22 March 2025 22:50:37 +0000 (0:00:00.761) 0:00:26.456 ********
2025-03-22 22:50:41.926268 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-22 22:50:41.926690 | orchestrator |
2025-03-22 22:50:41.926728 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2025-03-22 22:50:41.926752 | orchestrator | Saturday 22 March 2025 22:50:41 +0000 (0:00:04.087) 0:00:30.543 ********
2025-03-22 22:50:47.661735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-03-22 22:50:47.662121 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-03-22 22:50:47.662946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-03-22 22:50:47.663277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-03-22 22:50:47.663686 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-03-22 22:50:47.663716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-03-22 22:50:47.664183 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-03-22 22:50:47.667314 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-03-22 22:50:47.667395 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-03-22 22:50:47.667419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-03-22 22:50:47.667969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-03-22 22:50:47.668455 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-03-22 22:50:47.668914 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-03-22 22:50:47.669280 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-03-22 22:50:47.669751 | orchestrator |
2025-03-22 22:50:47.670188 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2025-03-22 22:50:47.670762 | orchestrator | Saturday 22 March 2025 22:50:47 +0000 (0:00:05.735) 0:00:36.279 ********
2025-03-22 22:50:53.885141 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-03-22 22:50:53.886625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-03-22 22:50:53.887493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-03-22 22:50:53.887532 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-03-22 22:50:53.890497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-03-22 22:50:53.891521 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-03-22 22:50:53.891575 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-03-22 22:50:53.891606 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-03-22 22:50:53.891623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-03-22 22:50:53.891639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-03-22 22:50:53.891662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-03-22 22:50:53.892678 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-03-22 22:50:53.893541 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-03-22 22:50:53.893572 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-03-22 22:50:53.894003 | orchestrator |
2025-03-22 22:50:53.894638 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2025-03-22 22:50:53.895383 | orchestrator | Saturday 22 March 2025 22:50:53 +0000 (0:00:06.224) 0:00:42.504 ********
2025-03-22 22:50:55.494932 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-22 22:50:55.495133 | orchestrator |
2025-03-22 22:50:55.495968 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-03-22 22:50:55.496077 | orchestrator | Saturday 22 March 2025 22:50:55 +0000 (0:00:01.609) 0:00:44.114 ********
2025-03-22 22:50:56.030735 | orchestrator | ok: [testbed-manager]
2025-03-22 22:50:56.133121 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:50:56.613840 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:50:56.616388 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:50:56.617787 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:50:56.617886 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:50:56.618722 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:50:56.619081 | orchestrator |
2025-03-22 22:50:56.619548 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-03-22 22:50:56.620340 | orchestrator | Saturday 22 March 2025 22:50:56 +0000 (0:00:01.120) 0:00:45.234 ********
2025-03-22 22:50:56.726842 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2025-03-22 22:50:56.727281 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-03-22 22:50:56.728168 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2025-03-22 22:50:56.728997 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-03-22 22:50:56.834054 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:50:56.834152 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2025-03-22 22:50:56.836152 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-03-22 22:50:56.837478 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2025-03-22 22:50:56.838641 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-03-22 22:50:56.952364 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:50:56.952535 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2025-03-22 22:50:56.952834 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-03-22 22:50:56.953818 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2025-03-22 22:50:56.954627 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-03-22 22:50:57.300990 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:50:57.301607 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2025-03-22 22:50:57.303277 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-03-22 22:50:57.305096 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2025-03-22 22:50:57.444949 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-03-22 22:50:57.445047 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:50:57.446204 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2025-03-22 22:50:57.447186 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-03-22 22:50:57.448108 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2025-03-22 22:50:57.454128 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-03-22 22:50:57.557285 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:50:57.557491 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2025-03-22 22:50:57.558651 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-03-22 22:50:57.559436 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2025-03-22 22:50:57.561313 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-03-22 22:50:59.003027 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:50:59.003523 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2025-03-22 22:50:59.005133 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-03-22 22:50:59.005843 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2025-03-22 22:50:59.007070 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-03-22 22:50:59.007822 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:50:59.008164 | orchestrator |
2025-03-22 22:50:59.009311 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2025-03-22 22:50:59.215488 | orchestrator | Saturday 22 March 2025 22:50:58 +0000 (0:00:02.386) 0:00:47.621 ********
2025-03-22 22:50:59.215563 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:50:59.313650 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:50:59.402819 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:50:59.490297 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:50:59.598729 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:50:59.741841 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:50:59.743260 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:50:59.744007 | orchestrator |
2025-03-22 22:50:59.744632 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2025-03-22 22:50:59.745349 | orchestrator | Saturday 22 March 2025 22:50:59 +0000 (0:00:00.743) 0:00:48.365 ********
2025-03-22 22:51:00.130309 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:51:00.228012 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:51:00.336165 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:51:00.434159 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:51:00.534159 | orchestrator | skipping: [testbed-node-3]
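The "Create systemd networkd netdev/network files" tasks above render one `.netdev`/`.network` pair per logged item under `/etc/systemd/network/` (the `30-vxlan*.netdev`/`.network` names also appear in the networkd cleanup task). As a rough sketch only, the manager's `vxlan0` item (vni 42, mtu 1350, local_ip 192.168.16.5, address 192.168.112.5/20) could translate to files of the following shape; the exact templates belong to the `osism.commons.network` role, and the per-peer forwarding entries derived from the `dests` list are deliberately omitted here:

```shell
# Sketch: a plausible 30-vxlan0.netdev/.network pair reconstructed from the
# logged testbed-manager item (vni=42, mtu=1350, local_ip=192.168.16.5,
# address=192.168.112.5/20). Written to a temp dir, not /etc/systemd/network.
outdir="$(mktemp -d)"

cat > "$outdir/30-vxlan0.netdev" <<'EOF'
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5
EOF

cat > "$outdir/30-vxlan0.network" <<'EOF'
[Match]
Name=vxlan0

[Network]
Address=192.168.112.5/20
EOF

ls "$outdir"
```

On a real host these files would live in `/etc/systemd/network/` and be picked up by the "Reload systemd-networkd" handler; entries for the six `dests` peers (unicast VXLAN forwarding) would be added on top of this minimal pair.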
2025-03-22 22:51:00.568017 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:51:00.568895 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:51:00.569391 | orchestrator |
2025-03-22 22:51:00.569671 | orchestrator | PLAY RECAP *********************************************************************
2025-03-22 22:51:00.570453 | orchestrator | 2025-03-22 22:51:00 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-22 22:51:00.572164 | orchestrator | 2025-03-22 22:51:00 | INFO  | Please wait and do not abort execution.
2025-03-22 22:51:00.572197 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-03-22 22:51:00.574399 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-03-22 22:51:00.575067 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-03-22 22:51:00.575098 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-03-22 22:51:00.576462 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-03-22 22:51:00.577339 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-03-22 22:51:00.577805 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-03-22 22:51:00.578712 | orchestrator |
2025-03-22 22:51:00.579633 | orchestrator |
2025-03-22 22:51:00.580353 | orchestrator | TASKS RECAP ********************************************************************
2025-03-22 22:51:00.581557 | orchestrator | Saturday 22 March 2025 22:51:00 +0000 (0:00:00.826) 0:00:49.192 ********
2025-03-22 22:51:00.582289 | orchestrator | ===============================================================================
2025-03-22 22:51:00.582774 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.22s
2025-03-22 22:51:00.584599 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.74s
2025-03-22 22:51:00.585587 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.09s
2025-03-22 22:51:00.586150 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.48s
2025-03-22 22:51:00.586876 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.39s
2025-03-22 22:51:00.587401 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.37s
2025-03-22 22:51:00.587867 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 2.10s
2025-03-22 22:51:00.588086 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.89s
2025-03-22 22:51:00.588408 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.81s
2025-03-22 22:51:00.589339 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.79s
2025-03-22 22:51:00.590562 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.77s
2025-03-22 22:51:00.591141 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.61s
2025-03-22 22:51:00.592408 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.45s
2025-03-22 22:51:00.592867 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.37s
2025-03-22 22:51:00.593803 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.36s
2025-03-22 22:51:00.594428 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.33s
2025-03-22 22:51:00.594943 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.28s
2025-03-22 22:51:00.595459 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.12s
2025-03-22 22:51:00.595963 | orchestrator | osism.commons.network : Create required directories --------------------- 1.11s
2025-03-22 22:51:00.596892 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 1.00s
2025-03-22 22:51:01.313988 | orchestrator | + osism apply wireguard
2025-03-22 22:51:02.911503 | orchestrator | 2025-03-22 22:51:02 | INFO  | Task 2f6da460-7bf2-4f85-9261-ec134b1ad258 (wireguard) was prepared for execution.
2025-03-22 22:51:06.786653 | orchestrator | 2025-03-22 22:51:02 | INFO  | It takes a moment until task 2f6da460-7bf2-4f85-9261-ec134b1ad258 (wireguard) has been started and output is visible here.
2025-03-22 22:51:06.786785 | orchestrator |
2025-03-22 22:51:06.786887 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2025-03-22 22:51:06.786910 | orchestrator |
2025-03-22 22:51:06.787903 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2025-03-22 22:51:06.789197 | orchestrator | Saturday 22 March 2025 22:51:06 +0000 (0:00:00.212) 0:00:00.212 ********
2025-03-22 22:51:08.573146 | orchestrator | ok: [testbed-manager]
2025-03-22 22:51:08.573908 | orchestrator |
2025-03-22 22:51:08.574618 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2025-03-22 22:51:08.575548 | orchestrator | Saturday 22 March 2025 22:51:08 +0000 (0:00:01.790) 0:00:02.002 ********
2025-03-22 22:51:16.139137 | orchestrator | changed: [testbed-manager]
2025-03-22 22:51:16.139789 | orchestrator |
2025-03-22 22:51:16.141428 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2025-03-22 22:51:16.142312 | orchestrator | Saturday 22 March 2025 22:51:16 +0000 (0:00:07.564) 0:00:09.567 ********
2025-03-22 22:51:16.814403 | orchestrator | changed: [testbed-manager]
2025-03-22 22:51:16.814564 | orchestrator |
2025-03-22 22:51:16.814787 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2025-03-22 22:51:16.815278 | orchestrator | Saturday 22 March 2025 22:51:16 +0000 (0:00:00.678) 0:00:10.246 ********
2025-03-22 22:51:17.264731 | orchestrator | changed: [testbed-manager]
2025-03-22 22:51:17.265049 | orchestrator |
2025-03-22 22:51:17.266798 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2025-03-22 22:51:17.268176 | orchestrator | Saturday 22 March 2025 22:51:17 +0000 (0:00:00.448) 0:00:10.695 ********
2025-03-22 22:51:17.859167 | orchestrator | ok: [testbed-manager]
2025-03-22 22:51:17.859724 | orchestrator |
2025-03-22 22:51:17.860027 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2025-03-22 22:51:17.860695 | orchestrator | Saturday 22 March 2025 22:51:17 +0000 (0:00:00.596) 0:00:11.291 ********
2025-03-22 22:51:18.466593 | orchestrator | ok: [testbed-manager]
2025-03-22 22:51:18.468659 | orchestrator |
2025-03-22 22:51:18.469327 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2025-03-22 22:51:18.469363 | orchestrator | Saturday 22 March 2025 22:51:18 +0000 (0:00:00.605) 0:00:11.897 ********
2025-03-22 22:51:18.913175 | orchestrator | ok: [testbed-manager]
2025-03-22 22:51:18.913598 | orchestrator |
2025-03-22 22:51:18.914348 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2025-03-22 22:51:18.915297 | orchestrator | Saturday 22 March 2025 22:51:18 +0000 (0:00:00.447) 0:00:12.344 ********
2025-03-22 22:51:20.200029 | orchestrator | changed: [testbed-manager]
2025-03-22 22:51:20.200190 | orchestrator |
2025-03-22 22:51:20.200216 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2025-03-22 22:51:20.202080 | orchestrator | Saturday 22 March 2025 22:51:20 +0000 (0:00:01.280) 0:00:13.624 ********
2025-03-22 22:51:21.219096 | orchestrator | changed: [testbed-manager] => (item=None)
2025-03-22 22:51:21.220982 | orchestrator | changed: [testbed-manager]
2025-03-22 22:51:21.221800 | orchestrator |
2025-03-22 22:51:21.223132 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2025-03-22 22:51:21.224525 | orchestrator | Saturday 22 March 2025 22:51:21 +0000 (0:00:01.023) 0:00:14.648 ********
2025-03-22 22:51:23.172834 | orchestrator | changed: [testbed-manager]
2025-03-22 22:51:23.173467 | orchestrator |
2025-03-22 22:51:23.175514 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-03-22 22:51:24.130523 | orchestrator | Saturday 22 March 2025 22:51:23 +0000 (0:00:01.955) 0:00:16.603 ********
2025-03-22 22:51:24.130667 | orchestrator | changed: [testbed-manager]
2025-03-22 22:51:24.131111 | orchestrator |
2025-03-22 22:51:24.131553 | orchestrator | PLAY RECAP *********************************************************************
2025-03-22 22:51:24.131942 | orchestrator | 2025-03-22 22:51:24 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-22 22:51:24.132545 | orchestrator | 2025-03-22 22:51:24 | INFO  | Please wait and do not abort execution.
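For orientation, the wireguard play above (create server keypair and preshared key, template `wg0.conf`, enable `wg-quick@wg0.service`) results in a configuration of roughly this shape on the manager. Every concrete value below is a placeholder, not taken from the job output:

```shell
# Hypothetical wg0.conf shape; keys, port, and addresses are placeholders.
conf="$(mktemp -d)/wg0.conf"
cat > "$conf" <<'EOF'
[Interface]
# PrivateKey = <server private key read back by "Get private key - server">
ListenPort = 51820
Address = 10.100.0.1/24

[Peer]
# PublicKey    = <client public key>
# PresharedKey = <key read back by "Get preshared key">
AllowedIPs = 10.100.0.2/32
EOF
grep -c '^\[' "$conf"
```

`wg-quick@wg0.service` brings the interface up from this file, and the "Restart wg0 service" handler re-reads it after the role changes the configuration.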
2025-03-22 22:51:24.132593 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 22:51:24.133547 | orchestrator |
2025-03-22 22:51:24.134379 | orchestrator |
2025-03-22 22:51:24.134974 | orchestrator | TASKS RECAP ********************************************************************
2025-03-22 22:51:24.135313 | orchestrator | Saturday 22 March 2025 22:51:24 +0000 (0:00:00.958) 0:00:17.562 ********
2025-03-22 22:51:24.135956 | orchestrator | ===============================================================================
2025-03-22 22:51:24.136857 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.56s
2025-03-22 22:51:24.137419 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.96s
2025-03-22 22:51:24.137721 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.79s
2025-03-22 22:51:24.138505 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.28s
2025-03-22 22:51:24.139205 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 1.02s
2025-03-22 22:51:24.139646 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.96s
2025-03-22 22:51:24.139914 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.68s
2025-03-22 22:51:24.140146 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.61s
2025-03-22 22:51:24.140299 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.60s
2025-03-22 22:51:24.140625 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.45s
2025-03-22 22:51:24.140880 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.45s
2025-03-22 22:51:24.847667 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-03-22 22:51:24.884349 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2025-03-22 22:51:24.960425 | orchestrator | Dload Upload Total Spent Left Speed
2025-03-22 22:51:24.960482 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 195 0 --:--:-- --:--:-- --:--:-- 197
2025-03-22 22:51:24.972050 | orchestrator | + osism apply --environment custom workarounds
2025-03-22 22:51:26.698332 | orchestrator | 2025-03-22 22:51:26 | INFO  | Trying to run play workarounds in environment custom
2025-03-22 22:51:26.750842 | orchestrator | 2025-03-22 22:51:26 | INFO  | Task daab61f3-f76b-4ae2-b4fa-177315034e01 (workarounds) was prepared for execution.
2025-03-22 22:51:30.589952 | orchestrator | 2025-03-22 22:51:26 | INFO  | It takes a moment until task daab61f3-f76b-4ae2-b4fa-177315034e01 (workarounds) has been started and output is visible here.
2025-03-22 22:51:30.590185 | orchestrator |
2025-03-22 22:51:30.590652 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-03-22 22:51:30.591163 | orchestrator |
2025-03-22 22:51:30.595672 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2025-03-22 22:51:30.776276 | orchestrator | Saturday 22 March 2025 22:51:30 +0000 (0:00:00.183) 0:00:00.183 ********
2025-03-22 22:51:30.776388 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2025-03-22 22:51:30.887925 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2025-03-22 22:51:30.994914 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2025-03-22 22:51:31.096948 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2025-03-22 22:51:31.319783 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2025-03-22 22:51:31.503450 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2025-03-22 22:51:31.504299 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2025-03-22 22:51:31.504997 | orchestrator |
2025-03-22 22:51:31.505643 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2025-03-22 22:51:31.506364 | orchestrator |
2025-03-22 22:51:31.506889 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-03-22 22:51:31.507476 | orchestrator | Saturday 22 March 2025 22:51:31 +0000 (0:00:00.912) 0:00:01.096 ********
2025-03-22 22:51:34.493406 | orchestrator | ok: [testbed-manager]
2025-03-22 22:51:34.494397 | orchestrator |
2025-03-22 22:51:34.500564 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2025-03-22 22:51:34.500791 | orchestrator |
2025-03-22 22:51:34.500822 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-03-22 22:51:34.501873 | orchestrator | Saturday 22 March 2025 22:51:34 +0000 (0:00:02.985) 0:00:04.081 ********
2025-03-22 22:51:36.496432 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:51:36.498566 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:51:36.501262 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:51:36.501970 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:51:36.501997 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:51:36.502054 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:51:36.502351 | orchestrator |
2025-03-22 22:51:36.502632 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2025-03-22 22:51:36.503639 | orchestrator |
2025-03-22 22:51:36.503918 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2025-03-22 22:51:36.505369 | orchestrator | Saturday 22 March 2025 22:51:36 +0000 (0:00:02.010) 0:00:06.092 ********
2025-03-22 22:51:38.147414 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-03-22 22:51:38.148088 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-03-22 22:51:38.150478 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-03-22 22:51:38.151778 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-03-22 22:51:38.152696 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-03-22 22:51:38.153650 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-03-22 22:51:38.154389 | orchestrator |
2025-03-22 22:51:38.155482 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2025-03-22 22:51:38.156272 | orchestrator | Saturday 22 March 2025 22:51:38 +0000 (0:00:01.645) 0:00:07.737 ********
2025-03-22 22:51:41.708726 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:51:41.711111 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:51:41.713071 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:51:41.713125 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:51:41.713152 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:51:41.713185 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:51:41.713768 | orchestrator |
2025-03-22 22:51:41.713897 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2025-03-22 22:51:41.714557 | orchestrator | Saturday 22 March 2025 22:51:41 +0000 (0:00:03.564) 0:00:11.302 ********
2025-03-22 22:51:41.877193 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:51:41.965713 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:51:42.070911 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:51:42.158502 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:51:42.515950 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:51:42.516656 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:51:42.516697 | orchestrator |
2025-03-22 22:51:42.516798 | orchestrator | PLAY [Add a workaround service] ************************************************
2025-03-22 22:51:42.516981 | orchestrator |
2025-03-22 22:51:42.518299 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2025-03-22 22:51:42.521504 | orchestrator | Saturday 22 March 2025 22:51:42 +0000 (0:00:00.808) 0:00:12.111 ********
2025-03-22 22:51:44.418978 | orchestrator | changed: [testbed-manager]
2025-03-22 22:51:44.419171 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:51:44.419674 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:51:44.420480 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:51:44.420929 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:51:44.421262 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:51:44.424577 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:51:46.198466 | orchestrator |
2025-03-22 22:51:46.198567 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-03-22 22:51:46.198585 | orchestrator | Saturday 22 March 2025 22:51:44 +0000 (0:00:01.904) 0:00:14.015 ********
2025-03-22 22:51:46.198612 | orchestrator | changed: [testbed-manager]
2025-03-22 22:51:46.199118 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:51:46.200745 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:51:46.201133 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:51:46.201888 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:51:46.203260 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:51:46.204116 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:51:46.204220 | orchestrator |
2025-03-22 22:51:46.205408 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-03-22 22:51:46.205879 | orchestrator | Saturday 22 March 2025 22:51:46 +0000 (0:00:01.774) 0:00:15.790 ********
2025-03-22 22:51:47.858625 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:51:47.859400 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:51:47.859433 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:51:47.859455 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:51:47.860424 | orchestrator | ok: [testbed-manager]
2025-03-22 22:51:47.861755 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:51:47.863344 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:51:47.863776 | orchestrator |
2025-03-22 22:51:47.864448 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-03-22 22:51:47.865287 | orchestrator | Saturday 22 March 2025 22:51:47 +0000 (0:00:01.660) 0:00:17.450 ********
2025-03-22 22:51:49.791634 | orchestrator | changed: [testbed-manager]
2025-03-22 22:51:49.792479 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:51:49.796374 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:51:49.797581 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:51:49.797608 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:51:49.797623 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:51:49.797637 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:51:49.797685 | orchestrator |
2025-03-22 22:51:49.798342 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-03-22 22:51:49.798782 | orchestrator | Saturday 22 March 2025 22:51:49 +0000 (0:00:01.933) 0:00:19.384 ********
2025-03-22 22:51:49.980577 | orchestrator | skipping: [testbed-manager]
2025-03-22 22:51:50.060298 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:51:50.147516 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:51:50.258946 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:51:50.346667 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:51:50.490351 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:51:50.491613 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:51:50.492573 | orchestrator |
2025-03-22 22:51:50.493609 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-03-22 22:51:50.498186 | orchestrator |
2025-03-22 22:51:50.498796 | orchestrator | TASK [Install python3-docker] **************************************************
2025-03-22 22:51:50.498827 | orchestrator | Saturday 22 March 2025 22:51:50 +0000 (0:00:00.701) 0:00:20.085 ********
2025-03-22 22:51:53.769990 | orchestrator | ok: [testbed-manager]
2025-03-22 22:51:53.770613 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:51:53.770821 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:51:53.771756 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:51:53.773124 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:51:53.773801 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:51:53.774650 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:51:53.775015 | orchestrator |
2025-03-22 22:51:53.775354 | orchestrator | PLAY RECAP *********************************************************************
2025-03-22 22:51:53.775787 | orchestrator | 2025-03-22 22:51:53 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-22 22:51:53.776022 | orchestrator | 2025-03-22 22:51:53 | INFO  | Please wait and do not abort execution.
2025-03-22 22:51:53.776519 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-03-22 22:51:53.776992 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-22 22:51:53.777356 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-22 22:51:53.777676 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-22 22:51:53.778103 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-22 22:51:53.778578 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-22 22:51:53.779110 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-22 22:51:53.779390 | orchestrator |
2025-03-22 22:51:53.780071 | orchestrator |
2025-03-22 22:51:53.780177 | orchestrator | TASKS RECAP ********************************************************************
2025-03-22 22:51:53.780636 | orchestrator | Saturday 22 March 2025 22:51:53 +0000 (0:00:03.279) 0:00:23.365 ********
2025-03-22 22:51:53.781644 | orchestrator | ===============================================================================
2025-03-22 22:51:53.782382 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.56s
2025-03-22 22:51:53.783068 | orchestrator | Install python3-docker -------------------------------------------------- 3.28s
2025-03-22 22:51:53.783844 | orchestrator | Apply netplan configuration --------------------------------------------- 2.99s
2025-03-22 22:51:53.784499 | orchestrator | Apply netplan configuration --------------------------------------------- 2.01s
2025-03-22 22:51:53.784800 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.93s
2025-03-22 22:51:53.785434 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.90s
2025-03-22 22:51:53.785613 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.77s
2025-03-22 22:51:53.786110 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.66s
2025-03-22 22:51:53.786386 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.65s
2025-03-22 22:51:53.786832 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.91s
2025-03-22 22:51:53.787628 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.81s
2025-03-22 22:51:53.788590 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.70s
2025-03-22 22:51:54.497904 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-03-22 22:51:56.163834 | orchestrator | 2025-03-22 22:51:56 | INFO  | Task 0fa61828-65c5-48e7-9dae-bc4596cf6014 (reboot) was prepared for execution.
2025-03-22 22:51:59.765347 | orchestrator | 2025-03-22 22:51:56 | INFO  | It takes a moment until task 0fa61828-65c5-48e7-9dae-bc4596cf6014 (reboot) has been started and output is visible here.
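The `-e ireallymeanit=yes` flag passed to `osism apply reboot` is a confirmation guard: the first task of each reboot play ("Exit playbook, if user did not mean to reboot systems") aborts unless the variable is set to yes, which is why it shows as skipping in the output that follows. A minimal shell sketch of the same guard pattern (function name and messages are illustrative, not taken from the OSISM code):

```shell
# Confirmation-guard sketch: refuse to act unless the caller passed an
# explicit "yes". Mirrors the ireallymeanit pattern above; the function
# name and wording are assumptions for illustration only.
confirm_reboot() {
    local ireallymeanit="${1:-no}"
    if [[ "${ireallymeanit}" != "yes" ]]; then
        echo "exiting: pass ireallymeanit=yes to really reboot the hosts" >&2
        return 1
    fi
    echo "proceeding with reboot"
}
```

The guard makes a destructive play safe to wire into automation: an accidental invocation without the extra variable is a no-op rather than a fleet-wide reboot.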
2025-03-22 22:51:59.765480 | orchestrator |
2025-03-22 22:51:59.766059 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-03-22 22:51:59.766097 | orchestrator |
2025-03-22 22:51:59.767280 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-03-22 22:51:59.767695 | orchestrator | Saturday 22 March 2025 22:51:59 +0000 (0:00:00.168) 0:00:00.168 ********
2025-03-22 22:51:59.863612 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:51:59.865361 | orchestrator |
2025-03-22 22:51:59.865844 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-03-22 22:51:59.866196 | orchestrator | Saturday 22 March 2025 22:51:59 +0000 (0:00:00.101) 0:00:00.270 ********
2025-03-22 22:52:01.022480 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:52:01.022734 | orchestrator |
2025-03-22 22:52:01.023974 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-03-22 22:52:01.146655 | orchestrator | Saturday 22 March 2025 22:52:01 +0000 (0:00:01.155) 0:00:01.426 ********
2025-03-22 22:52:01.146768 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:52:01.147505 | orchestrator |
2025-03-22 22:52:01.148434 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-03-22 22:52:01.149501 | orchestrator |
2025-03-22 22:52:01.150920 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-03-22 22:52:01.152226 | orchestrator | Saturday 22 March 2025 22:52:01 +0000 (0:00:00.124) 0:00:01.550 ********
2025-03-22 22:52:01.264331 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:52:01.264678 | orchestrator |
2025-03-22 22:52:01.264796 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-03-22 22:52:01.267365 | orchestrator | Saturday 22 March 2025 22:52:01 +0000 (0:00:00.120) 0:00:01.671 ********
2025-03-22 22:52:01.919605 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:52:01.920287 | orchestrator |
2025-03-22 22:52:01.921115 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-03-22 22:52:01.922979 | orchestrator | Saturday 22 March 2025 22:52:01 +0000 (0:00:00.653) 0:00:02.325 ********
2025-03-22 22:52:02.045581 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:52:02.046636 | orchestrator |
2025-03-22 22:52:02.046676 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-03-22 22:52:02.046869 | orchestrator |
2025-03-22 22:52:02.047379 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-03-22 22:52:02.048969 | orchestrator | Saturday 22 March 2025 22:52:02 +0000 (0:00:00.124) 0:00:02.449 ********
2025-03-22 22:52:02.297122 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:52:02.298659 | orchestrator |
2025-03-22 22:52:02.302715 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-03-22 22:52:02.302786 | orchestrator | Saturday 22 March 2025 22:52:02 +0000 (0:00:00.254) 0:00:02.703 ********
2025-03-22 22:52:03.005118 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:52:03.005557 | orchestrator |
2025-03-22 22:52:03.005594 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-03-22 22:52:03.006059 | orchestrator | Saturday 22 March 2025 22:52:02 +0000 (0:00:00.702) 0:00:03.406 ********
2025-03-22 22:52:03.122425 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:52:03.122550 | orchestrator |
2025-03-22 22:52:03.122982 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-03-22 22:52:03.123421 | orchestrator |
2025-03-22 22:52:03.124493 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-03-22 22:52:03.223456 | orchestrator | Saturday 22 March 2025 22:52:03 +0000 (0:00:00.120) 0:00:03.526 ********
2025-03-22 22:52:03.223516 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:52:03.223593 | orchestrator |
2025-03-22 22:52:03.223614 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-03-22 22:52:03.224204 | orchestrator | Saturday 22 March 2025 22:52:03 +0000 (0:00:00.103) 0:00:03.630 ********
2025-03-22 22:52:03.896975 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:52:03.897343 | orchestrator |
2025-03-22 22:52:03.900393 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-03-22 22:52:03.900703 | orchestrator | Saturday 22 March 2025 22:52:03 +0000 (0:00:00.671) 0:00:04.301 ********
2025-03-22 22:52:04.025315 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:52:04.025855 | orchestrator |
2025-03-22 22:52:04.025887 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-03-22 22:52:04.025926 | orchestrator |
2025-03-22 22:52:04.026255 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-03-22 22:52:04.026769 | orchestrator | Saturday 22 March 2025 22:52:04 +0000 (0:00:00.125) 0:00:04.427 ********
2025-03-22 22:52:04.140002 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:52:04.140858 | orchestrator |
2025-03-22 22:52:04.140890 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-03-22 22:52:04.141489 | orchestrator | Saturday 22 March 2025 22:52:04 +0000 (0:00:00.118) 0:00:04.546 ********
2025-03-22 22:52:04.827553 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:52:04.827738 | orchestrator |
2025-03-22 22:52:04.829729 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-03-22 22:52:04.959051 | orchestrator | Saturday 22 March 2025 22:52:04 +0000 (0:00:00.687) 0:00:05.233 ********
2025-03-22 22:52:04.959167 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:52:04.962369 | orchestrator |
2025-03-22 22:52:04.962463 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-03-22 22:52:04.963065 | orchestrator |
2025-03-22 22:52:04.966670 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-03-22 22:52:04.967686 | orchestrator | Saturday 22 March 2025 22:52:04 +0000 (0:00:00.128) 0:00:05.362 ********
2025-03-22 22:52:05.069305 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:52:05.069505 | orchestrator |
2025-03-22 22:52:05.069535 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-03-22 22:52:05.070092 | orchestrator | Saturday 22 March 2025 22:52:05 +0000 (0:00:00.113) 0:00:05.475 ********
2025-03-22 22:52:05.780364 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:52:05.780525 | orchestrator |
2025-03-22 22:52:05.780946 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-03-22 22:52:05.781368 | orchestrator | Saturday 22 March 2025 22:52:05 +0000 (0:00:00.711) 0:00:06.187 ********
2025-03-22 22:52:05.809684 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:52:05.810301 | orchestrator |
2025-03-22 22:52:05.811205 | orchestrator | PLAY RECAP *********************************************************************
2025-03-22 22:52:05.811425 | orchestrator | 2025-03-22 22:52:05 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-22 22:52:05.811677 | orchestrator | 2025-03-22 22:52:05 | INFO  | Please wait and do not abort execution.
2025-03-22 22:52:05.812749 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-22 22:52:05.813064 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-22 22:52:05.813894 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-22 22:52:05.814927 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-22 22:52:05.815553 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-22 22:52:05.815827 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-22 22:52:05.816226 | orchestrator |
2025-03-22 22:52:05.816870 | orchestrator |
2025-03-22 22:52:05.817502 | orchestrator | TASKS RECAP ********************************************************************
2025-03-22 22:52:05.817874 | orchestrator | Saturday 22 March 2025 22:52:05 +0000 (0:00:00.030) 0:00:06.217 ********
2025-03-22 22:52:05.818222 | orchestrator | ===============================================================================
2025-03-22 22:52:05.818626 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.58s
2025-03-22 22:52:05.818948 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.81s
2025-03-22 22:52:05.819219 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.65s
2025-03-22 22:52:06.482211 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-03-22 22:52:08.204954 | orchestrator | 2025-03-22 22:52:08 | INFO  | Task b7ab20dc-9137-477b-9cb4-32c37f37ac6d (wait-for-connection) was prepared for execution.
2025-03-22 22:52:11.990575 | orchestrator | 2025-03-22 22:52:08 | INFO  | It takes a moment until task b7ab20dc-9137-477b-9cb4-32c37f37ac6d (wait-for-connection) has been started and output is visible here.
2025-03-22 22:52:11.990722 | orchestrator |
2025-03-22 22:52:11.993445 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2025-03-22 22:52:11.995223 | orchestrator |
2025-03-22 22:52:11.995973 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2025-03-22 22:52:11.997253 | orchestrator | Saturday 22 March 2025 22:52:11 +0000 (0:00:00.212) 0:00:00.212 ********
2025-03-22 22:52:24.175272 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:52:24.175472 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:52:24.178301 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:52:24.180119 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:52:24.180936 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:52:24.180967 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:52:24.181798 | orchestrator |
2025-03-22 22:52:24.182133 | orchestrator | PLAY RECAP *********************************************************************
2025-03-22 22:52:24.182167 | orchestrator | 2025-03-22 22:52:24 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-22 22:52:24.183343 | orchestrator | 2025-03-22 22:52:24 | INFO  | Please wait and do not abort execution.
2025-03-22 22:52:24.183605 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 22:52:24.184371 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 22:52:24.185225 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 22:52:24.185923 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 22:52:24.186298 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 22:52:24.187398 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 22:52:24.187860 | orchestrator |
2025-03-22 22:52:24.188188 | orchestrator |
2025-03-22 22:52:24.188967 | orchestrator | TASKS RECAP ********************************************************************
2025-03-22 22:52:24.189440 | orchestrator | Saturday 22 March 2025 22:52:24 +0000 (0:00:12.181) 0:00:12.394 ********
2025-03-22 22:52:24.189873 | orchestrator | ===============================================================================
2025-03-22 22:52:24.190305 | orchestrator | Wait until remote system is reachable ---------------------------------- 12.18s
2025-03-22 22:52:24.867623 | orchestrator | + osism apply hddtemp
2025-03-22 22:52:26.566521 | orchestrator | 2025-03-22 22:52:26 | INFO  | Task 2ba3626a-2320-4bb1-ab82-d7532e25404d (hddtemp) was prepared for execution.
2025-03-22 22:52:30.268225 | orchestrator | 2025-03-22 22:52:26 | INFO  | It takes a moment until task 2ba3626a-2320-4bb1-ab82-d7532e25404d (hddtemp) has been started and output is visible here.
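The wait-for-connection play above does nothing but retry a trivial connection to each rebooted node until it answers (about 12 seconds here). A rough shell equivalent of that pattern follows; the SSH options, timeout, and retry delay are assumptions for illustration, and the real play uses Ansible's connection plugin rather than a raw ssh call:

```shell
# Poll a host over SSH until it accepts a trivial command or a deadline
# passes. Sketch of what "Wait until remote system is reachable" does;
# the options and delays below are assumed, not taken from the playbook.
wait_for_ssh() {
    local host="$1"
    local timeout="${2:-300}"                 # seconds to keep trying
    local deadline=$(( $(date +%s) + timeout ))
    until ssh -o BatchMode=yes -o ConnectTimeout=5 "${host}" true 2>/dev/null; do
        if (( $(date +%s) >= deadline )); then
            echo "host ${host} not reachable after ${timeout}s" >&2
            return 1
        fi
        sleep 5
    done
}
```

Because the reboot play deliberately did not wait ("Reboot system - wait for the reboot to complete" was skipped), this separate polling step is what guarantees the nodes are back before the next `osism apply` runs.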
2025-03-22 22:52:30.268415 | orchestrator |
2025-03-22 22:52:30.268495 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2025-03-22 22:52:30.268939 | orchestrator |
2025-03-22 22:52:30.269607 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2025-03-22 22:52:30.271390 | orchestrator | Saturday 22 March 2025 22:52:30 +0000 (0:00:00.241) 0:00:00.241 ********
2025-03-22 22:52:30.442816 | orchestrator | ok: [testbed-manager]
2025-03-22 22:52:30.533769 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:52:30.621476 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:52:30.705163 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:52:30.794315 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:52:31.077443 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:52:31.077727 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:52:31.077758 | orchestrator |
2025-03-22 22:52:31.078853 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2025-03-22 22:52:31.079844 | orchestrator | Saturday 22 March 2025 22:52:31 +0000 (0:00:00.806) 0:00:01.048 ********
2025-03-22 22:52:32.460981 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-22 22:52:32.461502 | orchestrator |
2025-03-22 22:52:32.462362 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2025-03-22 22:52:32.463259 | orchestrator | Saturday 22 March 2025 22:52:32 +0000 (0:00:01.383) 0:00:02.431 ********
2025-03-22 22:52:34.674804 | orchestrator | ok: [testbed-manager]
2025-03-22 22:52:34.676528 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:52:34.682391 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:52:34.682856 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:52:34.682886 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:52:34.682909 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:52:34.683626 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:52:34.684321 | orchestrator |
2025-03-22 22:52:34.684968 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2025-03-22 22:52:34.685402 | orchestrator | Saturday 22 March 2025 22:52:34 +0000 (0:00:02.217) 0:00:04.649 ********
2025-03-22 22:52:35.335475 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:52:35.433213 | orchestrator | changed: [testbed-manager]
2025-03-22 22:52:35.559644 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:52:36.038764 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:52:36.038907 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:52:36.039995 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:52:36.040953 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:52:36.041586 | orchestrator |
2025-03-22 22:52:36.042333 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2025-03-22 22:52:36.042964 | orchestrator | Saturday 22 March 2025 22:52:36 +0000 (0:00:01.358) 0:00:06.007 ********
2025-03-22 22:52:37.379911 | orchestrator | ok: [testbed-node-1]
2025-03-22 22:52:37.380291 | orchestrator | ok: [testbed-node-0]
2025-03-22 22:52:37.381072 | orchestrator | ok: [testbed-node-2]
2025-03-22 22:52:37.381917 | orchestrator | ok: [testbed-node-3]
2025-03-22 22:52:37.382853 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:52:37.383576 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:52:37.383930 | orchestrator | ok: [testbed-manager]
2025-03-22 22:52:37.384709 | orchestrator |
2025-03-22 22:52:37.385293 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2025-03-22 22:52:37.387896 | orchestrator | Saturday 22 March 2025 22:52:37 +0000 (0:00:01.342) 0:00:07.350 ********
2025-03-22 22:52:37.869576 | orchestrator | skipping: [testbed-node-0]
2025-03-22 22:52:37.998494 | orchestrator | skipping: [testbed-node-1]
2025-03-22 22:52:38.090590 | orchestrator | changed: [testbed-manager]
2025-03-22 22:52:38.182740 | orchestrator | skipping: [testbed-node-2]
2025-03-22 22:52:38.317669 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:52:38.319316 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:52:38.320763 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:52:38.321598 | orchestrator |
2025-03-22 22:52:38.322142 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2025-03-22 22:52:38.322781 | orchestrator | Saturday 22 March 2025 22:52:38 +0000 (0:00:00.939) 0:00:08.290 ********
2025-03-22 22:52:52.404312 | orchestrator | changed: [testbed-manager]
2025-03-22 22:52:52.404483 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:52:52.405085 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:52:52.405119 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:52:52.405131 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:52:52.405143 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:52:52.405163 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:52:52.405314 | orchestrator |
2025-03-22 22:52:52.406606 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2025-03-22 22:52:53.842481 | orchestrator | Saturday 22 March 2025 22:52:52 +0000 (0:00:14.078) 0:00:22.368 ********
2025-03-22 22:52:53.842620 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-22 22:52:53.842931 | orchestrator |
2025-03-22 22:52:53.843491 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2025-03-22 22:52:53.844076 | orchestrator | Saturday 22 March 2025 22:52:53 +0000 (0:00:01.445) 0:00:23.813 ********
2025-03-22 22:52:56.140735 | orchestrator | changed: [testbed-node-2]
2025-03-22 22:52:56.141765 | orchestrator | changed: [testbed-node-1]
2025-03-22 22:52:56.143723 | orchestrator | changed: [testbed-manager]
2025-03-22 22:52:56.145300 | orchestrator | changed: [testbed-node-0]
2025-03-22 22:52:56.146673 | orchestrator | changed: [testbed-node-3]
2025-03-22 22:52:56.149198 | orchestrator | changed: [testbed-node-4]
2025-03-22 22:52:56.150356 | orchestrator | changed: [testbed-node-5]
2025-03-22 22:52:56.151563 | orchestrator |
2025-03-22 22:52:56.152488 | orchestrator | PLAY RECAP *********************************************************************
2025-03-22 22:52:56.153454 | orchestrator | 2025-03-22 22:52:56 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-22 22:52:56.154219 | orchestrator | 2025-03-22 22:52:56 | INFO  | Please wait and do not abort execution.
2025-03-22 22:52:56.155404 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 22:52:56.156471 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-03-22 22:52:56.157126 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-03-22 22:52:56.157943 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-03-22 22:52:56.158971 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-03-22 22:52:56.159627 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-03-22 22:52:56.160515 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-03-22 22:52:56.161209 | orchestrator |
2025-03-22 22:52:56.162104 | orchestrator |
2025-03-22 22:52:56.162517 | orchestrator | TASKS RECAP ********************************************************************
2025-03-22 22:52:56.162921 | orchestrator | Saturday 22 March 2025 22:52:56 +0000 (0:00:02.301) 0:00:26.114 ********
2025-03-22 22:52:56.163332 | orchestrator | ===============================================================================
2025-03-22 22:52:56.164577 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 14.08s
2025-03-22 22:52:56.164889 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.30s
2025-03-22 22:52:56.164912 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.22s
2025-03-22 22:52:56.164929 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.45s
2025-03-22 22:52:56.165329 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.38s
2025-03-22 22:52:56.165779 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.36s 2025-03-22 22:52:56.166278 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.34s 2025-03-22 22:52:56.166992 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.94s 2025-03-22 22:52:56.167415 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.81s 2025-03-22 22:52:56.880673 | orchestrator | + sudo systemctl restart docker-compose@manager 2025-03-22 22:52:58.423066 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-03-22 22:52:58.423495 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-03-22 22:52:58.423531 | orchestrator | + local max_attempts=60 2025-03-22 22:52:58.423547 | orchestrator | + local name=ceph-ansible 2025-03-22 22:52:58.423561 | orchestrator | + local attempt_num=1 2025-03-22 22:52:58.423583 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-03-22 22:52:58.454112 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-22 22:52:58.455150 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-03-22 22:52:58.455175 | orchestrator | + local max_attempts=60 2025-03-22 22:52:58.455189 | orchestrator | + local name=kolla-ansible 2025-03-22 22:52:58.455203 | orchestrator | + local attempt_num=1 2025-03-22 22:52:58.455221 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-03-22 22:52:58.487643 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-22 22:52:58.488467 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-03-22 22:52:58.488493 | orchestrator | + local max_attempts=60 2025-03-22 22:52:58.488509 | orchestrator | + local name=osism-ansible 2025-03-22 22:52:58.488525 | orchestrator | + local attempt_num=1 2025-03-22 22:52:58.488544 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-03-22 22:52:58.521671 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-22 22:52:58.942539 | orchestrator | + [[ true == \t\r\u\e ]] 2025-03-22 22:52:58.942637 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-03-22 22:52:58.942694 | orchestrator | ARA in ceph-ansible already disabled. 2025-03-22 22:52:59.374674 | orchestrator | ARA in kolla-ansible already disabled. 2025-03-22 22:52:59.714559 | orchestrator | ARA in osism-ansible already disabled. 2025-03-22 22:53:00.108352 | orchestrator | ARA in osism-kubernetes already disabled. 2025-03-22 22:53:00.109134 | orchestrator | + osism apply gather-facts 2025-03-22 22:53:01.923583 | orchestrator | 2025-03-22 22:53:01 | INFO  | Task 9242eaae-a21c-4351-a69d-28a7ad50b482 (gather-facts) was prepared for execution. 2025-03-22 22:53:01.923839 | orchestrator | 2025-03-22 22:53:01 | INFO  | It takes a moment until task 9242eaae-a21c-4351-a69d-28a7ad50b482 (gather-facts) has been started and output is visible here. 
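The `wait_for_container_healthy` calls traced above (local `max_attempts`, `name`, `attempt_num`; `docker inspect` on `.State.Health.Status`) suggest roughly the following helper. This is a reconstruction from the xtrace output, not the actual script under /opt/configuration; the `HEALTH_CMD` indirection is an addition here so the polling loop can be exercised without a Docker daemon.

```shell
# Sketch of wait_for_container_healthy as implied by the trace above.
# HEALTH_CMD is a hypothetical hook (not in the original) that defaults to
# the docker inspect call seen in the log.
docker_health() { /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$1"; }
HEALTH_CMD=${HEALTH_CMD:-docker_health}

wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # Poll until the container reports "healthy", giving up after max_attempts.
    until [[ "$($HEALTH_CMD "$name" 2>/dev/null)" == healthy ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container $name not healthy after $max_attempts attempts" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 5
    done
}
```

In the run above the manager waits (up to 60 attempts each) for ceph-ansible, kolla-ansible and osism-ansible in turn; all three report healthy on the first check.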
2025-03-22 22:53:05.646809 | orchestrator | 2025-03-22 22:53:05.647129 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-03-22 22:53:05.647178 | orchestrator | 2025-03-22 22:53:05.651727 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-03-22 22:53:10.907091 | orchestrator | Saturday 22 March 2025 22:53:05 +0000 (0:00:00.195) 0:00:00.195 ******** 2025-03-22 22:53:10.907283 | orchestrator | ok: [testbed-node-1] 2025-03-22 22:53:10.907825 | orchestrator | ok: [testbed-node-2] 2025-03-22 22:53:10.911532 | orchestrator | ok: [testbed-node-0] 2025-03-22 22:53:10.912221 | orchestrator | ok: [testbed-manager] 2025-03-22 22:53:10.912274 | orchestrator | ok: [testbed-node-4] 2025-03-22 22:53:10.912290 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:53:10.912310 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:53:10.912996 | orchestrator | 2025-03-22 22:53:10.914166 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-03-22 22:53:10.914983 | orchestrator | 2025-03-22 22:53:10.915454 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-03-22 22:53:10.916048 | orchestrator | Saturday 22 March 2025 22:53:10 +0000 (0:00:05.261) 0:00:05.457 ******** 2025-03-22 22:53:11.077854 | orchestrator | skipping: [testbed-manager] 2025-03-22 22:53:11.161865 | orchestrator | skipping: [testbed-node-0] 2025-03-22 22:53:11.235070 | orchestrator | skipping: [testbed-node-1] 2025-03-22 22:53:11.329068 | orchestrator | skipping: [testbed-node-2] 2025-03-22 22:53:11.422788 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:53:11.461901 | orchestrator | skipping: [testbed-node-4] 2025-03-22 22:53:11.462898 | orchestrator | skipping: [testbed-node-5] 2025-03-22 22:53:11.463755 | orchestrator | 2025-03-22 22:53:11.465111 | orchestrator | PLAY RECAP 
********************************************************************* 2025-03-22 22:53:11.466157 | orchestrator | 2025-03-22 22:53:11 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-22 22:53:11.466567 | orchestrator | 2025-03-22 22:53:11 | INFO  | Please wait and do not abort execution. 2025-03-22 22:53:11.466600 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-22 22:53:11.467325 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-22 22:53:11.467935 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-22 22:53:11.468312 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-22 22:53:11.469329 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-22 22:53:11.469707 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-22 22:53:11.470404 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-22 22:53:11.470757 | orchestrator | 2025-03-22 22:53:11.471209 | orchestrator | 2025-03-22 22:53:11.472068 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-22 22:53:11.472540 | orchestrator | Saturday 22 March 2025 22:53:11 +0000 (0:00:00.556) 0:00:06.013 ******** 2025-03-22 22:53:11.472967 | orchestrator | =============================================================================== 2025-03-22 22:53:11.473659 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.26s 2025-03-22 22:53:11.473863 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s 2025-03-22 22:53:12.153823 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-03-22 22:53:12.166753 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-03-22 22:53:12.183368 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-03-22 22:53:12.197676 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-03-22 22:53:12.214200 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-03-22 22:53:12.229989 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-03-22 22:53:12.246076 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-03-22 22:53:12.261503 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-03-22 22:53:12.276599 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-03-22 22:53:12.289610 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-03-22 22:53:12.303377 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-03-22 22:53:12.315709 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-03-22 22:53:12.329476 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-03-22 22:53:12.345944 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-03-22 22:53:12.362858 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-03-22 22:53:12.382830 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-03-22 22:53:12.400527 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-03-22 22:53:12.418330 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-03-22 22:53:12.435439 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-03-22 22:53:12.450215 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-03-22 22:53:12.465458 | orchestrator | + [[ false == \t\r\u\e ]] 2025-03-22 22:53:12.588245 | orchestrator | changed 2025-03-22 22:53:12.662977 | 2025-03-22 22:53:12.663151 | TASK [Deploy services] 2025-03-22 22:53:12.801363 | orchestrator | skipping: Conditional result was False 2025-03-22 22:53:12.811791 | 2025-03-22 22:53:12.811896 | TASK [Deploy in a nutshell] 2025-03-22 22:53:13.501058 | orchestrator | 2025-03-22 22:53:13.501226 | orchestrator | # PULL IMAGES 2025-03-22 22:53:13.501272 | orchestrator | 2025-03-22 22:53:13.501287 | orchestrator | + set -e 2025-03-22 22:53:13.501333 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-03-22 22:53:13.501353 | orchestrator | ++ export INTERACTIVE=false 2025-03-22 22:53:13.501368 | orchestrator | ++ INTERACTIVE=false 2025-03-22 22:53:13.501389 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-03-22 22:53:13.501410 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-03-22 22:53:13.501424 | orchestrator | + source 
/opt/manager-vars.sh 2025-03-22 22:53:13.501436 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-03-22 22:53:13.501449 | orchestrator | ++ NUMBER_OF_NODES=6 2025-03-22 22:53:13.501461 | orchestrator | ++ export CEPH_VERSION=quincy 2025-03-22 22:53:13.501473 | orchestrator | ++ CEPH_VERSION=quincy 2025-03-22 22:53:13.501485 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-03-22 22:53:13.501498 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-03-22 22:53:13.501511 | orchestrator | ++ export MANAGER_VERSION=latest 2025-03-22 22:53:13.501523 | orchestrator | ++ MANAGER_VERSION=latest 2025-03-22 22:53:13.501536 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-03-22 22:53:13.501548 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-03-22 22:53:13.501560 | orchestrator | ++ export ARA=false 2025-03-22 22:53:13.501572 | orchestrator | ++ ARA=false 2025-03-22 22:53:13.501585 | orchestrator | ++ export TEMPEST=false 2025-03-22 22:53:13.501597 | orchestrator | ++ TEMPEST=false 2025-03-22 22:53:13.501610 | orchestrator | ++ export IS_ZUUL=true 2025-03-22 22:53:13.501622 | orchestrator | ++ IS_ZUUL=true 2025-03-22 22:53:13.501634 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.215 2025-03-22 22:53:13.501647 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.215 2025-03-22 22:53:13.501659 | orchestrator | ++ export EXTERNAL_API=false 2025-03-22 22:53:13.501672 | orchestrator | ++ EXTERNAL_API=false 2025-03-22 22:53:13.501684 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-03-22 22:53:13.501696 | orchestrator | ++ IMAGE_USER=ubuntu 2025-03-22 22:53:13.501715 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-03-22 22:53:13.501728 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-03-22 22:53:13.501740 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-03-22 22:53:13.501752 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-03-22 22:53:13.501764 | orchestrator | + echo 2025-03-22 22:53:13.501776 | orchestrator | + echo '# PULL 
IMAGES' 2025-03-22 22:53:13.501788 | orchestrator | + echo 2025-03-22 22:53:13.501808 | orchestrator | ++ semver latest 7.0.0 2025-03-22 22:53:13.556906 | orchestrator | + [[ -1 -ge 0 ]] 2025-03-22 22:53:15.145182 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-03-22 22:53:15.145290 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-03-22 22:53:15.145333 | orchestrator | 2025-03-22 22:53:15 | INFO  | Trying to run play pull-images in environment custom 2025-03-22 22:53:15.195803 | orchestrator | 2025-03-22 22:53:15 | INFO  | Task 888532c1-d008-409a-866c-90c52a646c93 (pull-images) was prepared for execution. 2025-03-22 22:53:18.992115 | orchestrator | 2025-03-22 22:53:15 | INFO  | It takes a moment until task 888532c1-d008-409a-866c-90c52a646c93 (pull-images) has been started and output is visible here. 2025-03-22 22:53:18.992222 | orchestrator | 2025-03-22 22:53:18.992515 | orchestrator | PLAY [Pull images] ************************************************************* 2025-03-22 22:53:18.993338 | orchestrator | 2025-03-22 22:53:18.994193 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-03-22 22:53:18.995166 | orchestrator | Saturday 22 March 2025 22:53:18 +0000 (0:00:00.172) 0:00:00.172 ******** 2025-03-22 22:53:53.567651 | orchestrator | changed: [testbed-manager] 2025-03-22 22:53:53.568271 | orchestrator | 2025-03-22 22:53:53.568304 | orchestrator | TASK [Pull other images] ******************************************************* 2025-03-22 22:53:53.568328 | orchestrator | Saturday 22 March 2025 22:53:53 +0000 (0:00:34.578) 0:00:34.750 ******** 2025-03-22 22:54:48.843175 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-03-22 22:54:48.843442 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-03-22 22:54:48.843473 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-03-22 22:54:48.843501 | orchestrator | changed: [testbed-manager] => (item=cinder) 
2025-03-22 22:54:48.843527 | orchestrator | changed: [testbed-manager] => (item=common) 2025-03-22 22:54:48.843549 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-03-22 22:54:48.844002 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-03-22 22:54:48.844062 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-03-22 22:54:48.844453 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-03-22 22:54:48.845358 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-03-22 22:54:48.846105 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-03-22 22:54:48.846692 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-03-22 22:54:48.846721 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-03-22 22:54:48.848601 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-03-22 22:54:48.848629 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-03-22 22:54:48.851316 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-03-22 22:54:48.851343 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-03-22 22:54:48.851357 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-03-22 22:54:48.851377 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-03-22 22:54:48.852058 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-03-22 22:54:48.854089 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-03-22 22:54:48.854805 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-03-22 22:54:48.854834 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-03-22 22:54:48.859379 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-03-22 22:54:48.860137 | orchestrator | 2025-03-22 22:54:48.860165 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-22 22:54:48.860183 | orchestrator | 
2025-03-22 22:54:48 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-22 22:54:48.860200 | orchestrator | 2025-03-22 22:54:48 | INFO  | Please wait and do not abort execution. 2025-03-22 22:54:48.860222 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-22 22:54:48.861557 | orchestrator | 2025-03-22 22:54:48.862472 | orchestrator | 2025-03-22 22:54:48.863971 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-22 22:54:48.864267 | orchestrator | Saturday 22 March 2025 22:54:48 +0000 (0:00:55.275) 0:01:30.025 ******** 2025-03-22 22:54:48.864725 | orchestrator | =============================================================================== 2025-03-22 22:54:48.865063 | orchestrator | Pull other images ------------------------------------------------------ 55.28s 2025-03-22 22:54:48.865720 | orchestrator | Pull keystone image ---------------------------------------------------- 34.58s 2025-03-22 22:54:51.553914 | orchestrator | 2025-03-22 22:54:51 | INFO  | Trying to run play wipe-partitions in environment custom 2025-03-22 22:54:51.608311 | orchestrator | 2025-03-22 22:54:51 | INFO  | Task 37ab96d1-94a4-47fc-981e-408b99629b57 (wipe-partitions) was prepared for execution. 2025-03-22 22:54:55.702956 | orchestrator | 2025-03-22 22:54:51 | INFO  | It takes a moment until task 37ab96d1-94a4-47fc-981e-408b99629b57 (wipe-partitions) has been started and output is visible here. 
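Both include.sh (`OSISM_APPLY_RETRY=1`) and the `osism apply -r 2 -e custom pull-images` call above lean on retries to ride out transient registry failures. The `-r` flag is osism's own; a generic shell equivalent, shown as an assumed sketch rather than osism internals, looks like:

```shell
# Hypothetical retry wrapper approximating what `osism apply -r N` provides:
# run a command up to N times and return as soon as it succeeds.
retry() {
    local attempts=$1; shift
    local i rc=1
    for (( i = 1; i <= attempts; i++ )); do
        "$@" && return 0     # success: stop retrying
        rc=$?
        echo "attempt $i/$attempts of '$*' failed (rc=$rc)" >&2
    done
    return "$rc"             # propagate the last failure
}

# e.g. retry 2 osism apply -e custom pull-images
```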
2025-03-22 22:54:55.703096 | orchestrator | 2025-03-22 22:54:55.703913 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-03-22 22:54:55.707607 | orchestrator | 2025-03-22 22:54:55.708595 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-03-22 22:54:55.709187 | orchestrator | Saturday 22 March 2025 22:54:55 +0000 (0:00:00.153) 0:00:00.154 ******** 2025-03-22 22:54:56.413373 | orchestrator | changed: [testbed-node-4] 2025-03-22 22:54:56.413541 | orchestrator | changed: [testbed-node-5] 2025-03-22 22:54:56.413571 | orchestrator | changed: [testbed-node-3] 2025-03-22 22:54:56.417613 | orchestrator | 2025-03-22 22:54:56.420059 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-03-22 22:54:56.420091 | orchestrator | Saturday 22 March 2025 22:54:56 +0000 (0:00:00.713) 0:00:00.867 ******** 2025-03-22 22:54:56.667106 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:54:56.792342 | orchestrator | skipping: [testbed-node-4] 2025-03-22 22:54:56.792904 | orchestrator | skipping: [testbed-node-5] 2025-03-22 22:54:56.792936 | orchestrator | 2025-03-22 22:54:56.792957 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-03-22 22:54:56.796463 | orchestrator | Saturday 22 March 2025 22:54:56 +0000 (0:00:00.375) 0:00:01.243 ******** 2025-03-22 22:54:57.685291 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:54:57.859645 | orchestrator | ok: [testbed-node-4] 2025-03-22 22:54:57.859770 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:54:57.859789 | orchestrator | 2025-03-22 22:54:57.860430 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-03-22 22:54:57.860455 | orchestrator | Saturday 22 March 2025 22:54:57 +0000 (0:00:00.896) 0:00:02.139 ******** 2025-03-22 22:54:57.860487 | orchestrator | skipping: 
[testbed-node-3] 2025-03-22 22:54:58.025926 | orchestrator | skipping: [testbed-node-4] 2025-03-22 22:54:59.417970 | orchestrator | skipping: [testbed-node-5] 2025-03-22 22:54:59.418125 | orchestrator | 2025-03-22 22:54:59.418146 | orchestrator | TASK [Check device availability] *********************************************** 2025-03-22 22:54:59.418181 | orchestrator | Saturday 22 March 2025 22:54:58 +0000 (0:00:00.344) 0:00:02.484 ******** 2025-03-22 22:54:59.418211 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-03-22 22:54:59.418338 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-03-22 22:54:59.418443 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-03-22 22:54:59.418853 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-03-22 22:54:59.419187 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-03-22 22:54:59.419587 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-03-22 22:54:59.419946 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-03-22 22:54:59.420370 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-03-22 22:54:59.420592 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-03-22 22:54:59.420976 | orchestrator | 2025-03-22 22:54:59.421341 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-03-22 22:54:59.421712 | orchestrator | Saturday 22 March 2025 22:54:59 +0000 (0:00:01.386) 0:00:03.870 ******** 2025-03-22 22:55:00.896759 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-03-22 22:55:00.897049 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-03-22 22:55:00.897086 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-03-22 22:55:00.897349 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-03-22 22:55:00.897771 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-03-22 22:55:00.898102 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2025-03-22 22:55:00.898554 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-03-22 22:55:00.898973 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-03-22 22:55:00.899705 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-03-22 22:55:00.900050 | orchestrator | 2025-03-22 22:55:00.900401 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-03-22 22:55:00.900820 | orchestrator | Saturday 22 March 2025 22:55:00 +0000 (0:00:01.482) 0:00:05.353 ******** 2025-03-22 22:55:03.345630 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-03-22 22:55:03.346442 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-03-22 22:55:03.346492 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-03-22 22:55:03.347712 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-03-22 22:55:03.349026 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-03-22 22:55:03.352443 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-03-22 22:55:03.352742 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-03-22 22:55:03.354586 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-03-22 22:55:03.354881 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-03-22 22:55:03.358678 | orchestrator | 2025-03-22 22:55:04.185913 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-03-22 22:55:04.186091 | orchestrator | Saturday 22 March 2025 22:55:03 +0000 (0:00:02.449) 0:00:07.803 ******** 2025-03-22 22:55:04.186131 | orchestrator | changed: [testbed-node-3] 2025-03-22 22:55:04.186467 | orchestrator | changed: [testbed-node-4] 2025-03-22 22:55:04.187085 | orchestrator | changed: [testbed-node-5] 2025-03-22 22:55:04.187659 | orchestrator | 2025-03-22 22:55:04.188368 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2025-03-22 22:55:04.189389 | orchestrator | Saturday 22 March 2025 22:55:04 +0000 (0:00:00.835) 0:00:08.639 ******** 2025-03-22 22:55:04.924171 | orchestrator | changed: [testbed-node-3] 2025-03-22 22:55:04.925313 | orchestrator | changed: [testbed-node-4] 2025-03-22 22:55:04.928446 | orchestrator | changed: [testbed-node-5] 2025-03-22 22:55:04.929420 | orchestrator | 2025-03-22 22:55:04.934392 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-22 22:55:04.935534 | orchestrator | 2025-03-22 22:55:04 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-22 22:55:04.935561 | orchestrator | 2025-03-22 22:55:04 | INFO  | Please wait and do not abort execution. 2025-03-22 22:55:04.935585 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-22 22:55:04.936629 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-22 22:55:04.937806 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-22 22:55:04.938683 | orchestrator | 2025-03-22 22:55:04.939611 | orchestrator | 2025-03-22 22:55:04.940177 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-22 22:55:04.940889 | orchestrator | Saturday 22 March 2025 22:55:04 +0000 (0:00:00.739) 0:00:09.378 ******** 2025-03-22 22:55:04.941818 | orchestrator | =============================================================================== 2025-03-22 22:55:04.942517 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.45s 2025-03-22 22:55:04.943365 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.48s 2025-03-22 22:55:04.944314 | orchestrator | Check device availability 
----------------------------------------------- 1.39s 2025-03-22 22:55:04.945168 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.90s 2025-03-22 22:55:04.945593 | orchestrator | Reload udev rules ------------------------------------------------------- 0.84s 2025-03-22 22:55:04.946391 | orchestrator | Request device events from the kernel ----------------------------------- 0.74s 2025-03-22 22:55:04.946723 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.71s 2025-03-22 22:55:04.947545 | orchestrator | Remove all rook related logical devices --------------------------------- 0.38s 2025-03-22 22:55:04.948215 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.34s 2025-03-22 22:55:07.497648 | orchestrator | 2025-03-22 22:55:07 | INFO  | Task d31fc079-fdfe-4d75-8015-0ef046154da4 (facts) was prepared for execution. 2025-03-22 22:55:11.537310 | orchestrator | 2025-03-22 22:55:07 | INFO  | It takes a moment until task d31fc079-fdfe-4d75-8015-0ef046154da4 (facts) has been started and output is visible here. 
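The wipe-partitions play above boils down to three per-device steps: drop signatures with wipefs, zero the first 32 MiB, then refresh udev so the kernel re-reads the now-empty devices. The sketch below replays that sequence against a throwaway image file so it is safe to run anywhere; on the testbed nodes the real targets are /dev/sdb, /dev/sdc and /dev/sdd via sudo, and the udev refresh (commented out here) only applies to real block devices.

```shell
# Per-device wipe as performed by the play above, demonstrated on a
# scratch file instead of a real disk.
wipe_device() {
    wipefs --all "$1"                                                 # remove fs/LVM/RAID signatures
    dd if=/dev/zero of="$1" bs=1M count=32 conv=notrunc status=none   # zero first 32 MiB
    # On real disks the play then refreshes udev:
    #   udevadm control --reload && udevadm trigger
}

img=$(mktemp)
dd if=/dev/urandom of="$img" bs=1M count=40 status=none               # stand-in "disk"
wipe_device "$img"
```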
2025-03-22 22:55:11.537443 | orchestrator | 2025-03-22 22:55:11.539321 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-03-22 22:55:11.539957 | orchestrator | 2025-03-22 22:55:11.540906 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-03-22 22:55:11.542714 | orchestrator | Saturday 22 March 2025 22:55:11 +0000 (0:00:00.253) 0:00:00.253 ******** 2025-03-22 22:55:12.791715 | orchestrator | ok: [testbed-node-1] 2025-03-22 22:55:12.792132 | orchestrator | ok: [testbed-node-0] 2025-03-22 22:55:12.792686 | orchestrator | ok: [testbed-node-2] 2025-03-22 22:55:12.793526 | orchestrator | ok: [testbed-manager] 2025-03-22 22:55:12.797117 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:55:12.797424 | orchestrator | ok: [testbed-node-4] 2025-03-22 22:55:12.798288 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:55:12.799008 | orchestrator | 2025-03-22 22:55:12.799757 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-03-22 22:55:12.799955 | orchestrator | Saturday 22 March 2025 22:55:12 +0000 (0:00:01.257) 0:00:01.510 ******** 2025-03-22 22:55:12.978128 | orchestrator | skipping: [testbed-manager] 2025-03-22 22:55:13.083584 | orchestrator | skipping: [testbed-node-0] 2025-03-22 22:55:13.181399 | orchestrator | skipping: [testbed-node-1] 2025-03-22 22:55:13.272159 | orchestrator | skipping: [testbed-node-2] 2025-03-22 22:55:13.358376 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:55:14.038577 | orchestrator | skipping: [testbed-node-4] 2025-03-22 22:55:14.038882 | orchestrator | skipping: [testbed-node-5] 2025-03-22 22:55:14.040068 | orchestrator | 2025-03-22 22:55:14.040510 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-03-22 22:55:14.041386 | orchestrator | 2025-03-22 22:55:14.041799 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-03-22 22:55:14.041972 | orchestrator | Saturday 22 March 2025 22:55:14 +0000 (0:00:01.247) 0:00:02.757 ******** 2025-03-22 22:55:18.687473 | orchestrator | ok: [testbed-node-2] 2025-03-22 22:55:18.688074 | orchestrator | ok: [testbed-node-1] 2025-03-22 22:55:18.692493 | orchestrator | ok: [testbed-node-0] 2025-03-22 22:55:18.694145 | orchestrator | ok: [testbed-manager] 2025-03-22 22:55:18.694995 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:55:18.696191 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:55:18.697312 | orchestrator | ok: [testbed-node-4] 2025-03-22 22:55:18.700982 | orchestrator | 2025-03-22 22:55:18.977513 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-03-22 22:55:18.977592 | orchestrator | 2025-03-22 22:55:18.977621 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-03-22 22:55:18.977637 | orchestrator | Saturday 22 March 2025 22:55:18 +0000 (0:00:04.653) 0:00:07.410 ******** 2025-03-22 22:55:18.977665 | orchestrator | skipping: [testbed-manager] 2025-03-22 22:55:19.082227 | orchestrator | skipping: [testbed-node-0] 2025-03-22 22:55:19.189576 | orchestrator | skipping: [testbed-node-1] 2025-03-22 22:55:19.290214 | orchestrator | skipping: [testbed-node-2] 2025-03-22 22:55:19.369477 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:55:19.414143 | orchestrator | skipping: [testbed-node-4] 2025-03-22 22:55:19.414746 | orchestrator | skipping: [testbed-node-5] 2025-03-22 22:55:19.415214 | orchestrator | 2025-03-22 22:55:19.415636 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-22 22:55:19.416003 | orchestrator | 2025-03-22 22:55:19 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-03-22 22:55:19.417552 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-22 22:55:19.417865 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-22 22:55:19.417891 | orchestrator | 2025-03-22 22:55:19 | INFO  | Please wait and do not abort execution. 2025-03-22 22:55:19.417911 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-22 22:55:19.418227 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-22 22:55:19.418647 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-22 22:55:19.419332 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-22 22:55:19.419649 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-22 22:55:19.420307 | orchestrator | 2025-03-22 22:55:19.420958 | orchestrator | 2025-03-22 22:55:19.421848 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-22 22:55:19.422494 | orchestrator | Saturday 22 March 2025 22:55:19 +0000 (0:00:00.727) 0:00:08.138 ******** 2025-03-22 22:55:19.424378 | orchestrator | =============================================================================== 2025-03-22 22:55:19.424828 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.65s 2025-03-22 22:55:19.425597 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.26s 2025-03-22 22:55:19.426431 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.25s 2025-03-22 22:55:19.427161 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.73s 2025-03-22 
22:55:21.811367 | orchestrator | 2025-03-22 22:55:21 | INFO  | Task 82484d34-1380-4c1c-968f-80de6ba3d49a (ceph-configure-lvm-volumes) was prepared for execution. 2025-03-22 22:55:26.247921 | orchestrator | 2025-03-22 22:55:21 | INFO  | It takes a moment until task 82484d34-1380-4c1c-968f-80de6ba3d49a (ceph-configure-lvm-volumes) has been started and output is visible here. 2025-03-22 22:55:26.248066 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-03-22 22:55:26.998066 | orchestrator | 2025-03-22 22:55:26.998522 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-03-22 22:55:26.998890 | orchestrator | 2025-03-22 22:55:27.001226 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-03-22 22:55:27.001896 | orchestrator | Saturday 22 March 2025 22:55:26 +0000 (0:00:00.620) 0:00:00.620 ******** 2025-03-22 22:55:27.279754 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-03-22 22:55:27.280636 | orchestrator | 2025-03-22 22:55:27.281580 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-03-22 22:55:27.282366 | orchestrator | Saturday 22 March 2025 22:55:27 +0000 (0:00:00.283) 0:00:00.903 ******** 2025-03-22 22:55:27.547225 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:55:27.548200 | orchestrator | 2025-03-22 22:55:27.551984 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 22:55:27.552454 | orchestrator | Saturday 22 March 2025 22:55:27 +0000 (0:00:00.267) 0:00:01.170 ******** 2025-03-22 22:55:28.316627 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-03-22 22:55:28.317382 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-03-22 22:55:28.319896 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-03-22 22:55:28.321269 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-03-22 22:55:28.322711 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-03-22 22:55:28.323652 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-03-22 22:55:28.323756 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-03-22 22:55:28.324101 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-03-22 22:55:28.324532 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-03-22 22:55:28.325179 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-03-22 22:55:28.326002 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-03-22 22:55:28.326570 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-03-22 22:55:28.329409 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-03-22 22:55:28.329448 | orchestrator | 2025-03-22 22:55:28.543882 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 22:55:28.543966 | orchestrator | Saturday 22 March 2025 22:55:28 +0000 (0:00:00.768) 0:00:01.939 ******** 2025-03-22 22:55:28.543995 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:55:28.546066 | orchestrator | 2025-03-22 22:55:28.547472 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 22:55:28.547504 | orchestrator | Saturday 22 March 2025 22:55:28 +0000 (0:00:00.230) 0:00:02.169 ******** 2025-03-22 22:55:28.789110 | 
orchestrator | skipping: [testbed-node-3] 2025-03-22 22:55:28.789856 | orchestrator | 2025-03-22 22:55:28.789897 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 22:55:28.791047 | orchestrator | Saturday 22 March 2025 22:55:28 +0000 (0:00:00.240) 0:00:02.410 ******** 2025-03-22 22:55:29.005430 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:55:29.005849 | orchestrator | 2025-03-22 22:55:29.007905 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 22:55:29.008923 | orchestrator | Saturday 22 March 2025 22:55:29 +0000 (0:00:00.220) 0:00:02.630 ******** 2025-03-22 22:55:29.306989 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:55:29.308587 | orchestrator | 2025-03-22 22:55:29.311511 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 22:55:29.312860 | orchestrator | Saturday 22 March 2025 22:55:29 +0000 (0:00:00.302) 0:00:02.932 ******** 2025-03-22 22:55:29.564930 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:55:29.565444 | orchestrator | 2025-03-22 22:55:29.565827 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 22:55:29.566697 | orchestrator | Saturday 22 March 2025 22:55:29 +0000 (0:00:00.256) 0:00:03.188 ******** 2025-03-22 22:55:29.821647 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:55:29.823015 | orchestrator | 2025-03-22 22:55:29.825114 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 22:55:29.825147 | orchestrator | Saturday 22 March 2025 22:55:29 +0000 (0:00:00.258) 0:00:03.447 ******** 2025-03-22 22:55:30.067528 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:55:30.071091 | orchestrator | 2025-03-22 22:55:30.071199 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 
2025-03-22 22:55:30.074752 | orchestrator | Saturday 22 March 2025 22:55:30 +0000 (0:00:00.245) 0:00:03.693 ******** 2025-03-22 22:55:30.297046 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:55:30.298934 | orchestrator | 2025-03-22 22:55:30.301880 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 22:55:30.302784 | orchestrator | Saturday 22 March 2025 22:55:30 +0000 (0:00:00.230) 0:00:03.923 ******** 2025-03-22 22:55:31.327692 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d944c393-c469-4703-9a84-253eb786ae38) 2025-03-22 22:55:31.329912 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d944c393-c469-4703-9a84-253eb786ae38) 2025-03-22 22:55:31.334215 | orchestrator | 2025-03-22 22:55:31.334314 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 22:55:31.335411 | orchestrator | Saturday 22 March 2025 22:55:31 +0000 (0:00:01.030) 0:00:04.953 ******** 2025-03-22 22:55:31.871264 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_873c6414-afc7-40f1-8cf8-9106a041fae2) 2025-03-22 22:55:31.876167 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_873c6414-afc7-40f1-8cf8-9106a041fae2) 2025-03-22 22:55:31.876943 | orchestrator | 2025-03-22 22:55:31.878863 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 22:55:31.882055 | orchestrator | Saturday 22 March 2025 22:55:31 +0000 (0:00:00.539) 0:00:05.493 ******** 2025-03-22 22:55:32.439695 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_34749356-9908-4430-b6a3-abe4e540ecc5) 2025-03-22 22:55:32.441364 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_34749356-9908-4430-b6a3-abe4e540ecc5) 2025-03-22 22:55:32.441449 | orchestrator | 2025-03-22 22:55:32.441846 | orchestrator | TASK [Add known links to the list of 
available block devices] ****************** 2025-03-22 22:55:32.443857 | orchestrator | Saturday 22 March 2025 22:55:32 +0000 (0:00:00.571) 0:00:06.064 ******** 2025-03-22 22:55:32.949721 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_708adc92-3837-440f-909c-446edf0d18e7) 2025-03-22 22:55:32.951521 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_708adc92-3837-440f-909c-446edf0d18e7) 2025-03-22 22:55:32.953364 | orchestrator | 2025-03-22 22:55:32.955037 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 22:55:32.956952 | orchestrator | Saturday 22 March 2025 22:55:32 +0000 (0:00:00.508) 0:00:06.572 ******** 2025-03-22 22:55:33.364765 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-03-22 22:55:33.364907 | orchestrator | 2025-03-22 22:55:33.368085 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 22:55:33.853693 | orchestrator | Saturday 22 March 2025 22:55:33 +0000 (0:00:00.417) 0:00:06.990 ******** 2025-03-22 22:55:33.853788 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-03-22 22:55:33.855720 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-03-22 22:55:33.857586 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-03-22 22:55:33.857685 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-03-22 22:55:33.858441 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-03-22 22:55:33.861005 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-03-22 22:55:33.862808 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for 
testbed-node-3 => (item=loop6) 2025-03-22 22:55:33.863982 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-03-22 22:55:33.865339 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-03-22 22:55:33.866181 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-03-22 22:55:33.866863 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-03-22 22:55:33.868001 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-03-22 22:55:33.868132 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-03-22 22:55:33.868160 | orchestrator | 2025-03-22 22:55:33.868934 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 22:55:33.869743 | orchestrator | Saturday 22 March 2025 22:55:33 +0000 (0:00:00.486) 0:00:07.476 ******** 2025-03-22 22:55:34.117539 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:55:34.117673 | orchestrator | 2025-03-22 22:55:34.118193 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 22:55:34.118480 | orchestrator | Saturday 22 March 2025 22:55:34 +0000 (0:00:00.260) 0:00:07.737 ******** 2025-03-22 22:55:34.373615 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:55:34.374949 | orchestrator | 2025-03-22 22:55:34.376426 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 22:55:34.564707 | orchestrator | Saturday 22 March 2025 22:55:34 +0000 (0:00:00.261) 0:00:07.998 ******** 2025-03-22 22:55:34.564771 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:55:34.565787 | orchestrator | 2025-03-22 22:55:34.567516 | orchestrator | TASK [Add known partitions to the 
list of available block devices] ************* 2025-03-22 22:55:34.569278 | orchestrator | Saturday 22 March 2025 22:55:34 +0000 (0:00:00.191) 0:00:08.189 ******** 2025-03-22 22:55:34.805444 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:55:34.806661 | orchestrator | 2025-03-22 22:55:34.807672 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 22:55:34.807704 | orchestrator | Saturday 22 March 2025 22:55:34 +0000 (0:00:00.240) 0:00:08.430 ******** 2025-03-22 22:55:35.344505 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:55:35.345394 | orchestrator | 2025-03-22 22:55:35.346353 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 22:55:35.347446 | orchestrator | Saturday 22 March 2025 22:55:35 +0000 (0:00:00.539) 0:00:08.969 ******** 2025-03-22 22:55:35.595458 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:55:35.596081 | orchestrator | 2025-03-22 22:55:35.597214 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 22:55:35.597337 | orchestrator | Saturday 22 March 2025 22:55:35 +0000 (0:00:00.250) 0:00:09.220 ******** 2025-03-22 22:55:35.800517 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:55:35.801455 | orchestrator | 2025-03-22 22:55:35.801495 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 22:55:35.803225 | orchestrator | Saturday 22 March 2025 22:55:35 +0000 (0:00:00.205) 0:00:09.425 ******** 2025-03-22 22:55:36.003527 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:55:36.003663 | orchestrator | 2025-03-22 22:55:36.003687 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 22:55:36.004995 | orchestrator | Saturday 22 March 2025 22:55:35 +0000 (0:00:00.201) 0:00:09.627 ******** 2025-03-22 22:55:36.765287 | orchestrator | ok: 
[testbed-node-3] => (item=sda1) 2025-03-22 22:55:36.767084 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-03-22 22:55:36.767840 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-03-22 22:55:36.769064 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-03-22 22:55:36.769416 | orchestrator | 2025-03-22 22:55:36.770179 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 22:55:36.770713 | orchestrator | Saturday 22 March 2025 22:55:36 +0000 (0:00:00.763) 0:00:10.390 ******** 2025-03-22 22:55:36.983345 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:55:37.226851 | orchestrator | 2025-03-22 22:55:37.226908 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 22:55:37.226925 | orchestrator | Saturday 22 March 2025 22:55:36 +0000 (0:00:00.215) 0:00:10.606 ******** 2025-03-22 22:55:37.226950 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:55:37.227214 | orchestrator | 2025-03-22 22:55:37.227689 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 22:55:37.228027 | orchestrator | Saturday 22 March 2025 22:55:37 +0000 (0:00:00.244) 0:00:10.851 ******** 2025-03-22 22:55:37.473390 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:55:37.473525 | orchestrator | 2025-03-22 22:55:37.477155 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 22:55:37.477226 | orchestrator | Saturday 22 March 2025 22:55:37 +0000 (0:00:00.245) 0:00:11.096 ******** 2025-03-22 22:55:37.672653 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:55:37.673486 | orchestrator | 2025-03-22 22:55:37.676332 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-03-22 22:55:37.678668 | orchestrator | Saturday 22 March 2025 22:55:37 +0000 (0:00:00.202) 0:00:11.299 ******** 2025-03-22 
22:55:37.897051 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-03-22 22:55:37.897419 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-03-22 22:55:37.897704 | orchestrator | 2025-03-22 22:55:37.898647 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-03-22 22:55:37.899544 | orchestrator | Saturday 22 March 2025 22:55:37 +0000 (0:00:00.223) 0:00:11.522 ******** 2025-03-22 22:55:38.281543 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:55:38.283463 | orchestrator | 2025-03-22 22:55:38.283712 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-03-22 22:55:38.288353 | orchestrator | Saturday 22 March 2025 22:55:38 +0000 (0:00:00.380) 0:00:11.903 ******** 2025-03-22 22:55:38.448152 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:55:38.448625 | orchestrator | 2025-03-22 22:55:38.449046 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-03-22 22:55:38.449348 | orchestrator | Saturday 22 March 2025 22:55:38 +0000 (0:00:00.166) 0:00:12.069 ******** 2025-03-22 22:55:38.621095 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:55:38.621327 | orchestrator | 2025-03-22 22:55:38.621462 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-03-22 22:55:38.622335 | orchestrator | Saturday 22 March 2025 22:55:38 +0000 (0:00:00.175) 0:00:12.245 ******** 2025-03-22 22:55:38.779474 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:55:38.779650 | orchestrator | 2025-03-22 22:55:38.780026 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-03-22 22:55:38.783018 | orchestrator | Saturday 22 March 2025 22:55:38 +0000 (0:00:00.156) 0:00:12.401 ******** 2025-03-22 22:55:38.984153 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 
'value': {'osd_lvm_uuid': '4729f66e-933a-5d14-9b0e-268b64ee2b75'}}) 2025-03-22 22:55:38.985143 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9a6484f2-0da7-5943-9f10-427ab04c9a45'}}) 2025-03-22 22:55:38.988031 | orchestrator | 2025-03-22 22:55:38.992774 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-03-22 22:55:39.215601 | orchestrator | Saturday 22 March 2025 22:55:38 +0000 (0:00:00.208) 0:00:12.610 ******** 2025-03-22 22:55:39.215721 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4729f66e-933a-5d14-9b0e-268b64ee2b75'}})  2025-03-22 22:55:39.216019 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9a6484f2-0da7-5943-9f10-427ab04c9a45'}})  2025-03-22 22:55:39.217058 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:55:39.217182 | orchestrator | 2025-03-22 22:55:39.218500 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-03-22 22:55:39.406844 | orchestrator | Saturday 22 March 2025 22:55:39 +0000 (0:00:00.226) 0:00:12.837 ******** 2025-03-22 22:55:39.406945 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4729f66e-933a-5d14-9b0e-268b64ee2b75'}})  2025-03-22 22:55:39.407097 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9a6484f2-0da7-5943-9f10-427ab04c9a45'}})  2025-03-22 22:55:39.411086 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:55:39.411420 | orchestrator | 2025-03-22 22:55:39.411792 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-03-22 22:55:39.412388 | orchestrator | Saturday 22 March 2025 22:55:39 +0000 (0:00:00.195) 0:00:13.032 ******** 2025-03-22 22:55:39.651505 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': 
{'osd_lvm_uuid': '4729f66e-933a-5d14-9b0e-268b64ee2b75'}})  2025-03-22 22:55:39.652709 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9a6484f2-0da7-5943-9f10-427ab04c9a45'}})  2025-03-22 22:55:39.653337 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:55:39.653879 | orchestrator | 2025-03-22 22:55:39.656071 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-03-22 22:55:39.658002 | orchestrator | Saturday 22 March 2025 22:55:39 +0000 (0:00:00.242) 0:00:13.275 ******** 2025-03-22 22:55:39.839475 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:55:39.839697 | orchestrator | 2025-03-22 22:55:39.843170 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-03-22 22:55:39.843835 | orchestrator | Saturday 22 March 2025 22:55:39 +0000 (0:00:00.189) 0:00:13.465 ******** 2025-03-22 22:55:40.079572 | orchestrator | ok: [testbed-node-3] 2025-03-22 22:55:40.084097 | orchestrator | 2025-03-22 22:55:40.084167 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-03-22 22:55:40.084625 | orchestrator | Saturday 22 March 2025 22:55:40 +0000 (0:00:00.236) 0:00:13.701 ******** 2025-03-22 22:55:40.265878 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:55:40.267564 | orchestrator | 2025-03-22 22:55:40.269965 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-03-22 22:55:40.434279 | orchestrator | Saturday 22 March 2025 22:55:40 +0000 (0:00:00.189) 0:00:13.891 ******** 2025-03-22 22:55:40.434406 | orchestrator | skipping: [testbed-node-3] 2025-03-22 22:55:40.435187 | orchestrator | 2025-03-22 22:55:40.435983 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-03-22 22:55:40.436936 | orchestrator | Saturday 22 March 2025 22:55:40 +0000 (0:00:00.168) 0:00:14.059 ******** 
2025-03-22 22:55:40.875230 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:55:40.876136 | orchestrator |
2025-03-22 22:55:40.880155 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-03-22 22:55:40.885063 | orchestrator | Saturday 22 March 2025 22:55:40 +0000 (0:00:00.438) 0:00:14.497 ********
2025-03-22 22:55:41.116691 | orchestrator | ok: [testbed-node-3] => {
2025-03-22 22:55:41.117425 | orchestrator |     "ceph_osd_devices": {
2025-03-22 22:55:41.117918 | orchestrator |         "sdb": {
2025-03-22 22:55:41.121484 | orchestrator |             "osd_lvm_uuid": "4729f66e-933a-5d14-9b0e-268b64ee2b75"
2025-03-22 22:55:41.124733 | orchestrator |         },
2025-03-22 22:55:41.125223 | orchestrator |         "sdc": {
2025-03-22 22:55:41.125716 | orchestrator |             "osd_lvm_uuid": "9a6484f2-0da7-5943-9f10-427ab04c9a45"
2025-03-22 22:55:41.126093 | orchestrator |         }
2025-03-22 22:55:41.126764 | orchestrator |     }
2025-03-22 22:55:41.127072 | orchestrator | }
2025-03-22 22:55:41.127301 | orchestrator |
2025-03-22 22:55:41.128094 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-03-22 22:55:41.128622 | orchestrator | Saturday 22 March 2025 22:55:41 +0000 (0:00:00.243) 0:00:14.741 ********
2025-03-22 22:55:41.326616 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:55:41.326759 | orchestrator |
2025-03-22 22:55:41.327153 | orchestrator | TASK [Print DB devices] ********************************************************
2025-03-22 22:55:41.327695 | orchestrator | Saturday 22 March 2025 22:55:41 +0000 (0:00:00.209) 0:00:14.950 ********
2025-03-22 22:55:41.537352 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:55:41.538387 | orchestrator |
2025-03-22 22:55:41.539845 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-03-22 22:55:41.540839 | orchestrator | Saturday 22 March 2025 22:55:41 +0000 (0:00:00.210) 0:00:15.160 ********
2025-03-22 22:55:41.754923 | orchestrator | skipping: [testbed-node-3]
2025-03-22 22:55:41.755754 | orchestrator |
2025-03-22 22:55:41.756230 | orchestrator | TASK [Print configuration data] ************************************************
2025-03-22 22:55:41.757602 | orchestrator | Saturday 22 March 2025 22:55:41 +0000 (0:00:00.212) 0:00:15.373 ********
2025-03-22 22:55:42.125799 | orchestrator | changed: [testbed-node-3] => {
2025-03-22 22:55:42.128532 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-03-22 22:55:42.130105 | orchestrator |         "ceph_osd_devices": {
2025-03-22 22:55:42.131586 | orchestrator |             "sdb": {
2025-03-22 22:55:42.133363 | orchestrator |                 "osd_lvm_uuid": "4729f66e-933a-5d14-9b0e-268b64ee2b75"
2025-03-22 22:55:42.134760 | orchestrator |             },
2025-03-22 22:55:42.135759 | orchestrator |             "sdc": {
2025-03-22 22:55:42.137053 | orchestrator |                 "osd_lvm_uuid": "9a6484f2-0da7-5943-9f10-427ab04c9a45"
2025-03-22 22:55:42.138466 | orchestrator |             }
2025-03-22 22:55:42.139432 | orchestrator |         },
2025-03-22 22:55:42.140752 | orchestrator |         "lvm_volumes": [
2025-03-22 22:55:42.141869 | orchestrator |             {
2025-03-22 22:55:42.143296 | orchestrator |                 "data": "osd-block-4729f66e-933a-5d14-9b0e-268b64ee2b75",
2025-03-22 22:55:42.144450 | orchestrator |                 "data_vg": "ceph-4729f66e-933a-5d14-9b0e-268b64ee2b75"
2025-03-22 22:55:42.145899 | orchestrator |             },
2025-03-22 22:55:42.146979 | orchestrator |             {
2025-03-22 22:55:42.148171 | orchestrator |                 "data": "osd-block-9a6484f2-0da7-5943-9f10-427ab04c9a45",
2025-03-22 22:55:42.149334 | orchestrator |                 "data_vg": "ceph-9a6484f2-0da7-5943-9f10-427ab04c9a45"
2025-03-22 22:55:42.150858 | orchestrator |             }
2025-03-22 22:55:42.151926 | orchestrator |         ]
2025-03-22 22:55:42.151959 | orchestrator |     }
2025-03-22 22:55:42.152862 | orchestrator | }
2025-03-22 22:55:42.154195 | orchestrator |
2025-03-22 22:55:42.155346 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-03-22 22:55:42.156978 | orchestrator | Saturday 22 March 2025 22:55:42 +0000 (0:00:00.375) 0:00:15.749 ******** 2025-03-22 22:55:44.823449 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-03-22 22:55:44.823890 | orchestrator | 2025-03-22 22:55:44.823938 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-03-22 22:55:44.825291 | orchestrator | 2025-03-22 22:55:44.826273 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-03-22 22:55:44.826875 | orchestrator | Saturday 22 March 2025 22:55:44 +0000 (0:00:02.697) 0:00:18.446 ******** 2025-03-22 22:55:45.064801 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-03-22 22:55:45.065004 | orchestrator | 2025-03-22 22:55:45.065587 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-03-22 22:55:45.066090 | orchestrator | Saturday 22 March 2025 22:55:45 +0000 (0:00:00.244) 0:00:18.690 ******** 2025-03-22 22:55:45.295764 | orchestrator | ok: [testbed-node-4] 2025-03-22 22:55:45.296549 | orchestrator | 2025-03-22 22:55:45.300303 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 22:55:45.759615 | orchestrator | Saturday 22 March 2025 22:55:45 +0000 (0:00:00.230) 0:00:18.921 ******** 2025-03-22 22:55:45.759737 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-03-22 22:55:45.760918 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-03-22 22:55:45.760951 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-03-22 22:55:45.762690 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-03-22 22:55:45.764572 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-03-22 22:55:45.765736 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-03-22 22:55:45.766759 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-03-22 22:55:45.768167 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-03-22 22:55:45.769117 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-03-22 22:55:45.770456 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-03-22 22:55:45.771393 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-03-22 22:55:45.772688 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-03-22 22:55:45.773896 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-03-22 22:55:45.774944 | orchestrator | 2025-03-22 22:55:45.775134 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 22:55:45.775980 | orchestrator | Saturday 22 March 2025 22:55:45 +0000 (0:00:00.460) 0:00:19.382 ******** 2025-03-22 22:55:45.952488 | orchestrator | skipping: [testbed-node-4] 2025-03-22 22:55:45.952652 | orchestrator | 2025-03-22 22:55:45.956409 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 22:55:45.956990 | orchestrator | Saturday 22 March 2025 22:55:45 +0000 (0:00:00.194) 0:00:19.577 ******** 2025-03-22 22:55:46.180365 | orchestrator | skipping: [testbed-node-4] 2025-03-22 22:55:46.182178 | orchestrator | 2025-03-22 22:55:46.182637 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 22:55:46.183756 | orchestrator | 
Saturday 22 March 2025 22:55:46 +0000 (0:00:00.228) 0:00:19.806 ********
2025-03-22 22:55:46.373876 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:55:46.801872 | orchestrator |
2025-03-22 22:55:46.801988 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-22 22:55:46.802007 | orchestrator | Saturday 22 March 2025 22:55:46 +0000 (0:00:00.188) 0:00:19.995 ********
2025-03-22 22:55:46.802093 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:55:46.998515 | orchestrator |
2025-03-22 22:55:46.998619 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-22 22:55:46.998637 | orchestrator | Saturday 22 March 2025 22:55:46 +0000 (0:00:00.424) 0:00:20.420 ********
2025-03-22 22:55:46.998666 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:55:47.001564 | orchestrator |
2025-03-22 22:55:47.002313 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-22 22:55:47.002349 | orchestrator | Saturday 22 March 2025 22:55:46 +0000 (0:00:00.203) 0:00:20.623 ********
2025-03-22 22:55:47.225548 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:55:47.225835 | orchestrator |
2025-03-22 22:55:47.229735 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-22 22:55:47.425015 | orchestrator | Saturday 22 March 2025 22:55:47 +0000 (0:00:00.228) 0:00:20.851 ********
2025-03-22 22:55:47.425097 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:55:47.426826 | orchestrator |
2025-03-22 22:55:47.431635 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-22 22:55:47.617659 | orchestrator | Saturday 22 March 2025 22:55:47 +0000 (0:00:00.199) 0:00:21.050 ********
2025-03-22 22:55:47.617741 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:55:47.620462 | orchestrator |
2025-03-22 22:55:47.623873 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-22 22:55:47.623909 | orchestrator | Saturday 22 March 2025 22:55:47 +0000 (0:00:00.191) 0:00:21.242 ********
2025-03-22 22:55:48.063758 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8830d5a0-b84d-4cff-a107-ff4c6c105a90)
2025-03-22 22:55:48.065353 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8830d5a0-b84d-4cff-a107-ff4c6c105a90)
2025-03-22 22:55:48.068094 | orchestrator |
2025-03-22 22:55:48.526556 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-22 22:55:48.526655 | orchestrator | Saturday 22 March 2025 22:55:48 +0000 (0:00:00.447) 0:00:21.689 ********
2025-03-22 22:55:48.526687 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b423c274-b2a0-4f0a-b616-ca1c2b60d0cd)
2025-03-22 22:55:48.529731 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b423c274-b2a0-4f0a-b616-ca1c2b60d0cd)
2025-03-22 22:55:48.529765 | orchestrator |
2025-03-22 22:55:48.531115 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-22 22:55:48.532076 | orchestrator | Saturday 22 March 2025 22:55:48 +0000 (0:00:00.461) 0:00:22.150 ********
2025-03-22 22:55:48.984742 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_57690c98-8cea-4402-9842-e7701133b4c4)
2025-03-22 22:55:48.986512 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_57690c98-8cea-4402-9842-e7701133b4c4)
2025-03-22 22:55:48.987458 | orchestrator |
2025-03-22 22:55:48.987495 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-22 22:55:48.988138 | orchestrator | Saturday 22 March 2025 22:55:48 +0000 (0:00:00.458) 0:00:22.609 ********
2025-03-22 22:55:49.573855 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_036a8c60-8400-4952-a958-bb8a1eba60c8)
2025-03-22 22:55:49.574520 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_036a8c60-8400-4952-a958-bb8a1eba60c8)
2025-03-22 22:55:49.575139 | orchestrator |
2025-03-22 22:55:49.575465 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-22 22:55:49.579136 | orchestrator | Saturday 22 March 2025 22:55:49 +0000 (0:00:00.590) 0:00:23.200 ********
2025-03-22 22:55:50.170732 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-03-22 22:55:50.171398 | orchestrator |
2025-03-22 22:55:50.171446 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-22 22:55:50.172279 | orchestrator | Saturday 22 March 2025 22:55:50 +0000 (0:00:00.593) 0:00:23.794 ********
2025-03-22 22:55:50.899496 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-03-22 22:55:50.899653 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-03-22 22:55:50.899995 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-03-22 22:55:50.900518 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-03-22 22:55:50.902566 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-03-22 22:55:50.903125 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-03-22 22:55:50.903152 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-03-22 22:55:50.903167 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-03-22 22:55:50.903186 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-03-22 22:55:50.903382 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-03-22 22:55:50.904048 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-03-22 22:55:50.904384 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-03-22 22:55:50.905270 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-03-22 22:55:50.905857 | orchestrator |
2025-03-22 22:55:50.906413 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-22 22:55:50.906843 | orchestrator | Saturday 22 March 2025 22:55:50 +0000 (0:00:00.730) 0:00:24.524 ********
2025-03-22 22:55:51.132763 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:55:51.132944 | orchestrator |
2025-03-22 22:55:51.138341 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-22 22:55:51.138424 | orchestrator | Saturday 22 March 2025 22:55:51 +0000 (0:00:00.234) 0:00:24.759 ********
2025-03-22 22:55:51.369444 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:55:51.370004 | orchestrator |
2025-03-22 22:55:51.370738 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-22 22:55:51.371450 | orchestrator | Saturday 22 March 2025 22:55:51 +0000 (0:00:00.234) 0:00:24.994 ********
2025-03-22 22:55:51.577383 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:55:51.578860 | orchestrator |
2025-03-22 22:55:51.580375 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-22 22:55:51.581438 | orchestrator | Saturday 22 March 2025 22:55:51 +0000 (0:00:00.209) 0:00:25.203 ********
2025-03-22 22:55:51.803226 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:55:51.805196 | orchestrator |
2025-03-22 22:55:51.806118 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-22 22:55:51.809549 | orchestrator | Saturday 22 March 2025 22:55:51 +0000 (0:00:00.224) 0:00:25.428 ********
2025-03-22 22:55:52.035337 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:55:52.036716 | orchestrator |
2025-03-22 22:55:52.038983 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-22 22:55:52.044152 | orchestrator | Saturday 22 March 2025 22:55:52 +0000 (0:00:00.230) 0:00:25.659 ********
2025-03-22 22:55:52.327853 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:55:52.328445 | orchestrator |
2025-03-22 22:55:52.329725 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-22 22:55:52.330602 | orchestrator | Saturday 22 March 2025 22:55:52 +0000 (0:00:00.294) 0:00:25.953 ********
2025-03-22 22:55:52.559085 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:55:52.559444 | orchestrator |
2025-03-22 22:55:52.559900 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-22 22:55:52.560971 | orchestrator | Saturday 22 March 2025 22:55:52 +0000 (0:00:00.230) 0:00:26.184 ********
2025-03-22 22:55:52.801991 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:55:52.804902 | orchestrator |
2025-03-22 22:55:53.972711 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-22 22:55:53.972820 | orchestrator | Saturday 22 March 2025 22:55:52 +0000 (0:00:00.241) 0:00:26.425 ********
2025-03-22 22:55:53.972853 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-03-22 22:55:53.973125 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-03-22 22:55:53.974872 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-03-22 22:55:53.975378 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-03-22 22:55:53.976195 | orchestrator |
2025-03-22 22:55:53.977005 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-22 22:55:53.977567 | orchestrator | Saturday 22 March 2025 22:55:53 +0000 (0:00:01.171) 0:00:27.596 ********
2025-03-22 22:55:54.243554 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:55:54.243728 | orchestrator |
2025-03-22 22:55:54.244377 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-22 22:55:54.245344 | orchestrator | Saturday 22 March 2025 22:55:54 +0000 (0:00:00.272) 0:00:27.869 ********
2025-03-22 22:55:54.506604 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:55:54.508327 | orchestrator |
2025-03-22 22:55:54.509567 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-22 22:55:54.510566 | orchestrator | Saturday 22 March 2025 22:55:54 +0000 (0:00:00.263) 0:00:28.132 ********
2025-03-22 22:55:54.727323 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:55:54.729165 | orchestrator |
2025-03-22 22:55:54.952226 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-22 22:55:54.952398 | orchestrator | Saturday 22 March 2025 22:55:54 +0000 (0:00:00.216) 0:00:28.348 ********
2025-03-22 22:55:54.952434 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:55:54.952514 | orchestrator |
2025-03-22 22:55:54.952770 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-03-22 22:55:54.953143 | orchestrator | Saturday 22 March 2025 22:55:54 +0000 (0:00:00.229) 0:00:28.578 ********
2025-03-22 22:55:55.143781 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2025-03-22 22:55:55.144323 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2025-03-22 22:55:55.145062 | orchestrator |
2025-03-22 22:55:55.145544 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-03-22 22:55:55.145937 | orchestrator | Saturday 22 March 2025 22:55:55 +0000 (0:00:00.190) 0:00:28.769 ********
2025-03-22 22:55:55.288641 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:55:55.288852 | orchestrator |
2025-03-22 22:55:55.289500 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-03-22 22:55:55.289938 | orchestrator | Saturday 22 March 2025 22:55:55 +0000 (0:00:00.144) 0:00:28.914 ********
2025-03-22 22:55:55.467582 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:55:55.468161 | orchestrator |
2025-03-22 22:55:55.468810 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-03-22 22:55:55.469497 | orchestrator | Saturday 22 March 2025 22:55:55 +0000 (0:00:00.179) 0:00:29.093 ********
2025-03-22 22:55:55.627862 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:55:55.628904 | orchestrator |
2025-03-22 22:55:55.628956 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-03-22 22:55:55.628990 | orchestrator | Saturday 22 March 2025 22:55:55 +0000 (0:00:00.158) 0:00:29.252 ********
2025-03-22 22:55:55.765887 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:55:55.766485 | orchestrator |
2025-03-22 22:55:55.766881 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-03-22 22:55:55.767024 | orchestrator | Saturday 22 March 2025 22:55:55 +0000 (0:00:00.137) 0:00:29.390 ********
2025-03-22 22:55:55.980900 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '42fe63a2-cbe4-507e-bca1-965016e62eb5'}})
2025-03-22 22:55:55.981394 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dc4d25c6-b33b-5666-b63a-cd4494109919'}})
2025-03-22 22:55:55.981852 | orchestrator |
2025-03-22 22:55:55.981878 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-03-22 22:55:55.981938 | orchestrator | Saturday 22 March 2025 22:55:55 +0000 (0:00:00.216) 0:00:29.606 ********
2025-03-22 22:55:56.404541 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '42fe63a2-cbe4-507e-bca1-965016e62eb5'}})
2025-03-22 22:55:56.405460 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dc4d25c6-b33b-5666-b63a-cd4494109919'}})
2025-03-22 22:55:56.405528 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:55:56.405548 | orchestrator |
2025-03-22 22:55:56.405569 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-03-22 22:55:56.405790 | orchestrator | Saturday 22 March 2025 22:55:56 +0000 (0:00:00.423) 0:00:30.029 ********
2025-03-22 22:55:56.596078 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '42fe63a2-cbe4-507e-bca1-965016e62eb5'}})
2025-03-22 22:55:56.596638 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dc4d25c6-b33b-5666-b63a-cd4494109919'}})
2025-03-22 22:55:56.598951 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:55:56.599361 | orchestrator |
2025-03-22 22:55:56.600267 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-03-22 22:55:56.601014 | orchestrator | Saturday 22 March 2025 22:55:56 +0000 (0:00:00.190) 0:00:30.220 ********
2025-03-22 22:55:56.795130 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '42fe63a2-cbe4-507e-bca1-965016e62eb5'}})
2025-03-22 22:55:56.795290 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dc4d25c6-b33b-5666-b63a-cd4494109919'}})
2025-03-22 22:55:56.796000 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:55:56.796339 | orchestrator |
2025-03-22 22:55:56.796829 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-03-22 22:55:56.797639 | orchestrator | Saturday 22 March 2025 22:55:56 +0000 (0:00:00.199) 0:00:30.419 ********
2025-03-22 22:55:56.957141 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:55:56.958088 | orchestrator |
2025-03-22 22:55:56.959504 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-03-22 22:55:56.962122 | orchestrator | Saturday 22 March 2025 22:55:56 +0000 (0:00:00.162) 0:00:30.582 ********
2025-03-22 22:55:57.109532 | orchestrator | ok: [testbed-node-4]
2025-03-22 22:55:57.110887 | orchestrator |
2025-03-22 22:55:57.110976 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-03-22 22:55:57.112330 | orchestrator | Saturday 22 March 2025 22:55:57 +0000 (0:00:00.152) 0:00:30.734 ********
2025-03-22 22:55:57.247465 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:55:57.248283 | orchestrator |
2025-03-22 22:55:57.249968 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-03-22 22:55:57.250537 | orchestrator | Saturday 22 March 2025 22:55:57 +0000 (0:00:00.137) 0:00:30.872 ********
2025-03-22 22:55:57.434683 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:55:57.435774 | orchestrator |
2025-03-22 22:55:57.437317 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-03-22 22:55:57.438500 | orchestrator | Saturday 22 March 2025 22:55:57 +0000 (0:00:00.185) 0:00:31.057 ********
2025-03-22 22:55:57.572416 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:55:57.573327 | orchestrator |
2025-03-22 22:55:57.574785 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-03-22 22:55:57.576155 | orchestrator | Saturday 22 March 2025 22:55:57 +0000 (0:00:00.137) 0:00:31.195 ********
2025-03-22 22:55:57.733832 | orchestrator | ok: [testbed-node-4] => {
2025-03-22 22:55:57.735104 | orchestrator |     "ceph_osd_devices": {
2025-03-22 22:55:57.736855 | orchestrator |         "sdb": {
2025-03-22 22:55:57.738130 | orchestrator |             "osd_lvm_uuid": "42fe63a2-cbe4-507e-bca1-965016e62eb5"
2025-03-22 22:55:57.738171 | orchestrator |         },
2025-03-22 22:55:57.739030 | orchestrator |         "sdc": {
2025-03-22 22:55:57.740029 | orchestrator |             "osd_lvm_uuid": "dc4d25c6-b33b-5666-b63a-cd4494109919"
2025-03-22 22:55:57.740795 | orchestrator |         }
2025-03-22 22:55:57.741178 | orchestrator |     }
2025-03-22 22:55:57.741739 | orchestrator | }
2025-03-22 22:55:57.742203 | orchestrator |
2025-03-22 22:55:57.742926 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-03-22 22:55:57.743426 | orchestrator | Saturday 22 March 2025 22:55:57 +0000 (0:00:00.161) 0:00:31.357 ********
2025-03-22 22:55:57.888417 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:55:57.889596 | orchestrator |
2025-03-22 22:55:57.889629 | orchestrator | TASK [Print DB devices] ********************************************************
2025-03-22 22:55:57.890480 | orchestrator | Saturday 22 March 2025 22:55:57 +0000 (0:00:00.154) 0:00:31.511 ********
2025-03-22 22:55:58.035452 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:55:58.035792 | orchestrator |
2025-03-22 22:55:58.036526 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-03-22 22:55:58.037042 | orchestrator | Saturday 22 March 2025 22:55:58 +0000 (0:00:00.149) 0:00:31.661 ********
2025-03-22 22:55:58.174329 | orchestrator | skipping: [testbed-node-4]
2025-03-22 22:55:58.175156 | orchestrator |
2025-03-22 22:55:58.176026 | orchestrator | TASK [Print configuration data] ************************************************
2025-03-22 22:55:58.177342 | orchestrator | Saturday 22 March 2025 22:55:58 +0000 (0:00:00.137) 0:00:31.798 ********
2025-03-22 22:55:58.757481 | orchestrator | changed: [testbed-node-4] => {
2025-03-22 22:55:58.758451 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-03-22 22:55:58.760367 | orchestrator |         "ceph_osd_devices": {
2025-03-22 22:55:58.763288 | orchestrator |             "sdb": {
2025-03-22 22:55:58.763322 | orchestrator |                 "osd_lvm_uuid": "42fe63a2-cbe4-507e-bca1-965016e62eb5"
2025-03-22 22:55:58.763704 | orchestrator |             },
2025-03-22 22:55:58.764360 | orchestrator |             "sdc": {
2025-03-22 22:55:58.764869 | orchestrator |                 "osd_lvm_uuid": "dc4d25c6-b33b-5666-b63a-cd4494109919"
2025-03-22 22:55:58.765697 | orchestrator |             }
2025-03-22 22:55:58.765771 | orchestrator |         },
2025-03-22 22:55:58.766313 | orchestrator |         "lvm_volumes": [
2025-03-22 22:55:58.766865 | orchestrator |             {
2025-03-22 22:55:58.768542 | orchestrator |                 "data": "osd-block-42fe63a2-cbe4-507e-bca1-965016e62eb5",
2025-03-22 22:55:58.769590 | orchestrator |                 "data_vg": "ceph-42fe63a2-cbe4-507e-bca1-965016e62eb5"
2025-03-22 22:55:58.770423 | orchestrator |             },
2025-03-22 22:55:58.771506 | orchestrator |             {
2025-03-22 22:55:58.772340 | orchestrator |                 "data": "osd-block-dc4d25c6-b33b-5666-b63a-cd4494109919",
2025-03-22 22:55:58.773499 | orchestrator |                 "data_vg": "ceph-dc4d25c6-b33b-5666-b63a-cd4494109919"
2025-03-22 22:55:58.774514 | orchestrator |             }
2025-03-22 22:55:58.774868 | orchestrator |         ]
2025-03-22 22:55:58.775650 | orchestrator |     }
2025-03-22 22:55:58.776296 | orchestrator | }
2025-03-22 22:55:58.776531 | orchestrator |
2025-03-22 22:55:58.777376 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-03-22 22:55:58.778854 | orchestrator | Saturday 22 March 2025 22:55:58 +0000 (0:00:00.584) 0:00:32.383 ********
2025-03-22 22:56:00.324915 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-03-22 22:56:00.325071 | orchestrator |
2025-03-22 22:56:00.326200 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-03-22 22:56:00.327494 | orchestrator |
2025-03-22 22:56:00.328329 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-03-22 22:56:00.329478 | orchestrator | Saturday 22 March 2025 22:56:00 +0000 (0:00:01.563) 0:00:33.946 ********
2025-03-22 22:56:00.603082 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-03-22 22:56:00.603270 | orchestrator |
2025-03-22 22:56:00.604543 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-03-22 22:56:00.604838 | orchestrator | Saturday 22 March 2025 22:56:00 +0000 (0:00:00.281) 0:00:34.228 ********
2025-03-22 22:56:01.290834 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:56:01.291295 | orchestrator |
2025-03-22 22:56:01.293073 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-22 22:56:01.293883 | orchestrator | Saturday 22 March 2025 22:56:01 +0000 (0:00:00.686) 0:00:34.915 ********
2025-03-22 22:56:01.790149 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-03-22 22:56:01.791176 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-03-22 22:56:01.792096 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-03-22 22:56:01.793278 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-03-22 22:56:01.794096 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-03-22 22:56:01.794886 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-03-22 22:56:01.796062 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-03-22 22:56:01.796621 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-03-22 22:56:01.797476 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-03-22 22:56:01.798073 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-03-22 22:56:01.798765 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-03-22 22:56:01.799183 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-03-22 22:56:01.799639 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-03-22 22:56:01.800412 | orchestrator |
2025-03-22 22:56:01.801038 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-22 22:56:01.801223 | orchestrator | Saturday 22 March 2025 22:56:01 +0000 (0:00:00.499) 0:00:35.415 ********
2025-03-22 22:56:02.073386 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:56:02.074178 | orchestrator |
2025-03-22 22:56:02.074879 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-22 22:56:02.076285 | orchestrator | Saturday 22 March 2025 22:56:02 +0000 (0:00:00.280) 0:00:35.696 ********
2025-03-22 22:56:02.318960 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:56:02.319749 | orchestrator |
2025-03-22 22:56:02.320178 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-22 22:56:02.321046 | orchestrator | Saturday 22 March 2025 22:56:02 +0000 (0:00:00.247) 0:00:35.944 ********
2025-03-22 22:56:02.551511 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:56:02.552692 | orchestrator |
2025-03-22 22:56:02.552948 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-22 22:56:02.556030 | orchestrator | Saturday 22 March 2025 22:56:02 +0000 (0:00:00.232) 0:00:36.176 ********
2025-03-22 22:56:02.771483 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:56:02.772276 | orchestrator |
2025-03-22 22:56:02.772526 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-22 22:56:02.773156 | orchestrator | Saturday 22 March 2025 22:56:02 +0000 (0:00:00.220) 0:00:36.397 ********
2025-03-22 22:56:03.051560 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:56:03.052232 | orchestrator |
2025-03-22 22:56:03.053110 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-22 22:56:03.053815 | orchestrator | Saturday 22 March 2025 22:56:03 +0000 (0:00:00.274) 0:00:36.671 ********
2025-03-22 22:56:03.302661 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:56:03.304228 | orchestrator |
2025-03-22 22:56:03.305111 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-22 22:56:03.306434 | orchestrator | Saturday 22 March 2025 22:56:03 +0000 (0:00:00.255) 0:00:36.927 ********
2025-03-22 22:56:03.526918 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:56:03.527677 | orchestrator |
2025-03-22 22:56:03.530404 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-22 22:56:03.783908 | orchestrator | Saturday 22 March 2025 22:56:03 +0000 (0:00:00.224) 0:00:37.151 ********
2025-03-22 22:56:03.784021 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:56:03.785426 | orchestrator |
2025-03-22 22:56:03.786337 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-22 22:56:03.787330 | orchestrator | Saturday 22 March 2025 22:56:03 +0000 (0:00:00.257) 0:00:37.408 ********
2025-03-22 22:56:04.624604 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9e9b02d0-ba34-4a5b-a8b6-7a2befe88955)
2025-03-22 22:56:04.625976 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9e9b02d0-ba34-4a5b-a8b6-7a2befe88955)
2025-03-22 22:56:04.626284 | orchestrator |
2025-03-22 22:56:04.627706 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-22 22:56:04.629304 | orchestrator | Saturday 22 March 2025 22:56:04 +0000 (0:00:00.839) 0:00:38.248 ********
2025-03-22 22:56:05.128550 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_753a6438-823e-47df-a447-41be61353e18)
2025-03-22 22:56:05.129000 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_753a6438-823e-47df-a447-41be61353e18)
2025-03-22 22:56:05.130168 | orchestrator |
2025-03-22 22:56:05.132196 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-22 22:56:05.133890 | orchestrator | Saturday 22 March 2025 22:56:05 +0000 (0:00:00.503) 0:00:38.751 ********
2025-03-22 22:56:05.621167 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_18369482-6d33-4fed-9778-d084c11eaa5e)
2025-03-22 22:56:05.621394 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_18369482-6d33-4fed-9778-d084c11eaa5e)
2025-03-22 22:56:05.621772 | orchestrator |
2025-03-22 22:56:05.623331 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-22 22:56:05.623678 | orchestrator | Saturday 22 March 2025 22:56:05 +0000 (0:00:00.494) 0:00:39.246 ********
2025-03-22 22:56:06.128380 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_af10e111-d90b-4be1-a196-da98d242bbc6)
2025-03-22 22:56:06.128536 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_af10e111-d90b-4be1-a196-da98d242bbc6)
2025-03-22 22:56:06.128557 | orchestrator |
2025-03-22 22:56:06.128577 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-22 22:56:06.131337 | orchestrator | Saturday 22 March 2025 22:56:06 +0000 (0:00:00.500) 0:00:39.746 ********
2025-03-22 22:56:06.517350 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-03-22 22:56:06.517482 | orchestrator |
2025-03-22 22:56:06.518082 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-22 22:56:06.518113 | orchestrator | Saturday 22 March 2025 22:56:06 +0000 (0:00:00.394) 0:00:40.140 ********
2025-03-22 22:56:06.982523 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-03-22 22:56:06.983180 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-03-22 22:56:06.983215 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-03-22 22:56:06.983654 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-03-22 22:56:06.984185 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-03-22 22:56:06.984913 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-03-22 22:56:06.987420 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-03-22 22:56:06.987820 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-03-22 22:56:06.987843 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-03-22 22:56:06.987858 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-03-22 22:56:06.987872 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-03-22 22:56:06.987886 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-03-22 22:56:06.987905 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-03-22 22:56:06.988129 | orchestrator |
2025-03-22 22:56:06.988609 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-22 22:56:06.988948 | orchestrator | Saturday 22 March 2025 22:56:06 +0000 (0:00:00.466) 0:00:40.607 ********
2025-03-22 22:56:07.208201 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:56:07.208319 | orchestrator |
2025-03-22 22:56:07.209259 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-22 22:56:07.210148 | orchestrator | Saturday 22 March 2025 22:56:07 +0000 (0:00:00.225) 0:00:40.832 ********
2025-03-22 22:56:07.422669 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:56:07.423488 | orchestrator |
2025-03-22 22:56:07.423906 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-22 22:56:07.424671 | orchestrator | Saturday 22 March 2025 22:56:07 +0000 (0:00:00.216) 0:00:41.048 ********
2025-03-22 22:56:07.647681 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:56:07.648044 | orchestrator |
2025-03-22 22:56:07.648809 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-22 22:56:07.649669 | orchestrator | Saturday 22 March 2025 22:56:07 +0000 (0:00:00.224) 0:00:41.273 ********
2025-03-22 22:56:08.332261 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:56:08.333004 | orchestrator |
2025-03-22 22:56:08.333939 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-22 22:56:08.334734 | orchestrator | Saturday 22 March 2025 22:56:08 +0000 (0:00:00.681) 0:00:41.955 ********
2025-03-22 22:56:08.555795 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:56:08.556388 | orchestrator |
2025-03-22 22:56:08.556415 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-22 22:56:08.556745 | orchestrator | Saturday 22 March 2025 22:56:08 +0000 (0:00:00.224) 0:00:42.180 ********
2025-03-22 22:56:08.792384 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:56:08.793213 | orchestrator |
2025-03-22 22:56:08.794158 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-22 22:56:08.795305 | orchestrator | Saturday 22 March 2025 22:56:08 +0000 (0:00:00.236) 0:00:42.416 ********
2025-03-22 22:56:09.021316 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:56:09.021845 | orchestrator |
2025-03-22 22:56:09.022924 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-22 22:56:09.023771 | orchestrator | Saturday 22 March 2025 22:56:09 +0000 (0:00:00.227) 0:00:42.644 ********
2025-03-22 22:56:09.257181 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:56:09.257323 | orchestrator |
2025-03-22 22:56:09.258160 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-22 22:56:09.258694 | orchestrator | Saturday 22 March 2025 22:56:09 +0000 (0:00:00.238) 0:00:42.882 ********
2025-03-22 22:56:09.943625 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-03-22 22:56:09.944277 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-03-22 22:56:09.947427 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-03-22 22:56:09.947795 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-03-22 22:56:09.947955 | orchestrator |
2025-03-22 22:56:09.948474 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-22 22:56:09.949158 | orchestrator | Saturday 22 March 2025 22:56:09 +0000 (0:00:00.684) 0:00:43.567 ********
2025-03-22 22:56:10.196436 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:56:10.197359 | orchestrator |
2025-03-22 22:56:10.197390 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-22 22:56:10.197446 | orchestrator | Saturday 22 March 2025 22:56:10 +0000 (0:00:00.253) 0:00:43.821 ********
2025-03-22 22:56:10.443617 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:56:10.651678 | orchestrator |
2025-03-22 22:56:10.652440 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-22 22:56:10.652472 | orchestrator | Saturday 22 March 2025 22:56:10 +0000 (0:00:00.242) 0:00:44.063 ********
2025-03-22 22:56:10.652502 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:56:10.653267 | orchestrator |
2025-03-22 22:56:10.657860 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-22 22:56:10.658137 | orchestrator | Saturday 22 March 2025 22:56:10 +0000 (0:00:00.213) 0:00:44.276 ********
2025-03-22 22:56:10.884698 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:56:10.886334 | orchestrator |
2025-03-22 22:56:10.887465 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-03-22 22:56:10.891658 | orchestrator | Saturday 22 March 2025 22:56:10 +0000 (0:00:00.232) 0:00:44.509 ********
2025-03-22 22:56:11.305529 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2025-03-22 22:56:11.307824 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2025-03-22 22:56:11.309120 | orchestrator |
2025-03-22 22:56:11.310322 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-03-22 22:56:11.312338 | orchestrator | Saturday 22 March 2025 22:56:11 +0000 (0:00:00.420) 0:00:44.929 ********
2025-03-22 22:56:11.479856 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:56:11.614086 | orchestrator |
2025-03-22 22:56:11.614179 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-03-22 22:56:11.614198 | orchestrator | Saturday 22 March 2025 22:56:11 +0000 (0:00:00.171) 0:00:45.101 ********
2025-03-22 22:56:11.614226 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:56:11.615779 | orchestrator |
2025-03-22 22:56:11.616578 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-03-22 22:56:11.617371 | orchestrator | Saturday 22 March 2025 22:56:11 +0000 (0:00:00.136) 0:00:45.237 ********
2025-03-22 22:56:11.803748 | orchestrator | skipping: [testbed-node-5]
2025-03-22 22:56:11.804121 | orchestrator |
2025-03-22 22:56:11.804708 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-03-22 22:56:11.805171 | orchestrator | Saturday 22 March 2025 22:56:11 +0000 (0:00:00.192) 0:00:45.429 ********
2025-03-22 22:56:11.960206 | orchestrator | ok: [testbed-node-5]
2025-03-22 22:56:11.960679 | orchestrator |
2025-03-22 22:56:11.961145 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-03-22 22:56:11.961614 | orchestrator | Saturday 22 March 2025 22:56:11 +0000 (0:00:00.156) 0:00:45.586 ********
2025-03-22 22:56:12.157910 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cd0beb18-7cb7-5f7e-bb8d-a321f863b568'}})
2025-03-22 22:56:12.160551 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '54fa1689-32eb-51dd-8e84-55dfa69ec772'}})
2025-03-22 22:56:12.163527 | orchestrator |
2025-03-22 22:56:12.163621 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-03-22 22:56:12.164058 | orchestrator | Saturday
22 March 2025 22:56:12 +0000 (0:00:00.196) 0:00:45.782 ******** 2025-03-22 22:56:12.325232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cd0beb18-7cb7-5f7e-bb8d-a321f863b568'}})  2025-03-22 22:56:12.326692 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '54fa1689-32eb-51dd-8e84-55dfa69ec772'}})  2025-03-22 22:56:12.327861 | orchestrator | skipping: [testbed-node-5] 2025-03-22 22:56:12.328825 | orchestrator | 2025-03-22 22:56:12.330513 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-03-22 22:56:12.331218 | orchestrator | Saturday 22 March 2025 22:56:12 +0000 (0:00:00.167) 0:00:45.950 ******** 2025-03-22 22:56:12.514497 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cd0beb18-7cb7-5f7e-bb8d-a321f863b568'}})  2025-03-22 22:56:12.515066 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '54fa1689-32eb-51dd-8e84-55dfa69ec772'}})  2025-03-22 22:56:12.515738 | orchestrator | skipping: [testbed-node-5] 2025-03-22 22:56:12.516197 | orchestrator | 2025-03-22 22:56:12.516831 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-03-22 22:56:12.516970 | orchestrator | Saturday 22 March 2025 22:56:12 +0000 (0:00:00.189) 0:00:46.140 ******** 2025-03-22 22:56:12.703998 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cd0beb18-7cb7-5f7e-bb8d-a321f863b568'}})  2025-03-22 22:56:12.704529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '54fa1689-32eb-51dd-8e84-55dfa69ec772'}})  2025-03-22 22:56:12.704660 | orchestrator | skipping: [testbed-node-5] 2025-03-22 22:56:12.705174 | orchestrator | 2025-03-22 22:56:12.706488 | orchestrator | TASK [Compile lvm_volumes] 
***************************************************** 2025-03-22 22:56:12.706864 | orchestrator | Saturday 22 March 2025 22:56:12 +0000 (0:00:00.189) 0:00:46.329 ******** 2025-03-22 22:56:12.868384 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:56:12.869027 | orchestrator | 2025-03-22 22:56:12.869915 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-03-22 22:56:12.870560 | orchestrator | Saturday 22 March 2025 22:56:12 +0000 (0:00:00.164) 0:00:46.494 ******** 2025-03-22 22:56:13.053613 | orchestrator | ok: [testbed-node-5] 2025-03-22 22:56:13.054502 | orchestrator | 2025-03-22 22:56:13.055284 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-03-22 22:56:13.055317 | orchestrator | Saturday 22 March 2025 22:56:13 +0000 (0:00:00.183) 0:00:46.678 ******** 2025-03-22 22:56:13.228421 | orchestrator | skipping: [testbed-node-5] 2025-03-22 22:56:13.229230 | orchestrator | 2025-03-22 22:56:13.229944 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-03-22 22:56:13.230712 | orchestrator | Saturday 22 March 2025 22:56:13 +0000 (0:00:00.174) 0:00:46.853 ******** 2025-03-22 22:56:13.643903 | orchestrator | skipping: [testbed-node-5] 2025-03-22 22:56:13.645195 | orchestrator | 2025-03-22 22:56:13.646145 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-03-22 22:56:13.646478 | orchestrator | Saturday 22 March 2025 22:56:13 +0000 (0:00:00.412) 0:00:47.265 ******** 2025-03-22 22:56:13.794374 | orchestrator | skipping: [testbed-node-5] 2025-03-22 22:56:13.794909 | orchestrator | 2025-03-22 22:56:13.795385 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-03-22 22:56:13.795970 | orchestrator | Saturday 22 March 2025 22:56:13 +0000 (0:00:00.155) 0:00:47.420 ******** 2025-03-22 22:56:13.940830 | orchestrator | ok: 
[testbed-node-5] => { 2025-03-22 22:56:13.941449 | orchestrator |  "ceph_osd_devices": { 2025-03-22 22:56:13.942096 | orchestrator |  "sdb": { 2025-03-22 22:56:13.942666 | orchestrator |  "osd_lvm_uuid": "cd0beb18-7cb7-5f7e-bb8d-a321f863b568" 2025-03-22 22:56:13.943919 | orchestrator |  }, 2025-03-22 22:56:13.944464 | orchestrator |  "sdc": { 2025-03-22 22:56:13.945094 | orchestrator |  "osd_lvm_uuid": "54fa1689-32eb-51dd-8e84-55dfa69ec772" 2025-03-22 22:56:13.945912 | orchestrator |  } 2025-03-22 22:56:13.946223 | orchestrator |  } 2025-03-22 22:56:13.947362 | orchestrator | } 2025-03-22 22:56:13.947902 | orchestrator | 2025-03-22 22:56:13.948653 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-03-22 22:56:13.948976 | orchestrator | Saturday 22 March 2025 22:56:13 +0000 (0:00:00.146) 0:00:47.566 ******** 2025-03-22 22:56:14.117454 | orchestrator | skipping: [testbed-node-5] 2025-03-22 22:56:14.117619 | orchestrator | 2025-03-22 22:56:14.117891 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-03-22 22:56:14.117988 | orchestrator | Saturday 22 March 2025 22:56:14 +0000 (0:00:00.175) 0:00:47.742 ******** 2025-03-22 22:56:14.258075 | orchestrator | skipping: [testbed-node-5] 2025-03-22 22:56:14.258823 | orchestrator | 2025-03-22 22:56:14.259562 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-03-22 22:56:14.259874 | orchestrator | Saturday 22 March 2025 22:56:14 +0000 (0:00:00.141) 0:00:47.883 ******** 2025-03-22 22:56:14.400483 | orchestrator | skipping: [testbed-node-5] 2025-03-22 22:56:14.400616 | orchestrator | 2025-03-22 22:56:14.401401 | orchestrator | TASK [Print configuration data] ************************************************ 2025-03-22 22:56:14.401821 | orchestrator | Saturday 22 March 2025 22:56:14 +0000 (0:00:00.141) 0:00:48.025 ******** 2025-03-22 22:56:14.699332 | orchestrator | changed: 
[testbed-node-5] => { 2025-03-22 22:56:14.700454 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-03-22 22:56:14.701334 | orchestrator |  "ceph_osd_devices": { 2025-03-22 22:56:14.701788 | orchestrator |  "sdb": { 2025-03-22 22:56:14.702787 | orchestrator |  "osd_lvm_uuid": "cd0beb18-7cb7-5f7e-bb8d-a321f863b568" 2025-03-22 22:56:14.703536 | orchestrator |  }, 2025-03-22 22:56:14.703824 | orchestrator |  "sdc": { 2025-03-22 22:56:14.705440 | orchestrator |  "osd_lvm_uuid": "54fa1689-32eb-51dd-8e84-55dfa69ec772" 2025-03-22 22:56:14.706362 | orchestrator |  } 2025-03-22 22:56:14.707576 | orchestrator |  }, 2025-03-22 22:56:14.708531 | orchestrator |  "lvm_volumes": [ 2025-03-22 22:56:14.709334 | orchestrator |  { 2025-03-22 22:56:14.710097 | orchestrator |  "data": "osd-block-cd0beb18-7cb7-5f7e-bb8d-a321f863b568", 2025-03-22 22:56:14.710127 | orchestrator |  "data_vg": "ceph-cd0beb18-7cb7-5f7e-bb8d-a321f863b568" 2025-03-22 22:56:14.710809 | orchestrator |  }, 2025-03-22 22:56:14.711285 | orchestrator |  { 2025-03-22 22:56:14.712033 | orchestrator |  "data": "osd-block-54fa1689-32eb-51dd-8e84-55dfa69ec772", 2025-03-22 22:56:14.712930 | orchestrator |  "data_vg": "ceph-54fa1689-32eb-51dd-8e84-55dfa69ec772" 2025-03-22 22:56:14.714263 | orchestrator |  } 2025-03-22 22:56:14.715019 | orchestrator |  ] 2025-03-22 22:56:14.715049 | orchestrator |  } 2025-03-22 22:56:14.715473 | orchestrator | } 2025-03-22 22:56:14.716270 | orchestrator | 2025-03-22 22:56:14.717027 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-03-22 22:56:14.717426 | orchestrator | Saturday 22 March 2025 22:56:14 +0000 (0:00:00.298) 0:00:48.324 ******** 2025-03-22 22:56:16.138383 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-03-22 22:56:16.139441 | orchestrator | 2025-03-22 22:56:16.140198 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-22 
22:56:16.141202 | orchestrator | 2025-03-22 22:56:16 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-22 22:56:16.142148 | orchestrator | 2025-03-22 22:56:16 | INFO  | Please wait and do not abort execution. 2025-03-22 22:56:16.142199 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-03-22 22:56:16.144139 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-03-22 22:56:16.145949 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-03-22 22:56:16.146491 | orchestrator | 2025-03-22 22:56:16.147896 | orchestrator | 2025-03-22 22:56:16.148120 | orchestrator | 2025-03-22 22:56:16.149450 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-22 22:56:16.150216 | orchestrator | Saturday 22 March 2025 22:56:16 +0000 (0:00:01.437) 0:00:49.761 ******** 2025-03-22 22:56:16.154094 | orchestrator | =============================================================================== 2025-03-22 22:56:16.154190 | orchestrator | Write configuration file ------------------------------------------------ 5.70s 2025-03-22 22:56:16.159593 | orchestrator | Add known links to the list of available block devices ------------------ 1.73s 2025-03-22 22:56:16.160410 | orchestrator | Add known partitions to the list of available block devices ------------- 1.68s 2025-03-22 22:56:16.161377 | orchestrator | Print configuration data ------------------------------------------------ 1.26s 2025-03-22 22:56:16.162150 | orchestrator | Get initial list of available block devices ----------------------------- 1.18s 2025-03-22 22:56:16.162983 | orchestrator | Add known partitions to the list of available block devices ------------- 1.17s 2025-03-22 22:56:16.163837 | orchestrator | Add known links to the list of available block 
devices ------------------ 1.03s 2025-03-22 22:56:16.167201 | orchestrator | Add known links to the list of available block devices ------------------ 0.84s 2025-03-22 22:56:16.168852 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.83s 2025-03-22 22:56:16.169363 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.82s 2025-03-22 22:56:16.170073 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.81s 2025-03-22 22:56:16.170355 | orchestrator | Set WAL devices config data --------------------------------------------- 0.77s 2025-03-22 22:56:16.170671 | orchestrator | Add known partitions to the list of available block devices ------------- 0.76s 2025-03-22 22:56:16.171508 | orchestrator | Set DB+WAL devices config data ------------------------------------------ 0.73s 2025-03-22 22:56:16.171618 | orchestrator | Generate WAL VG names --------------------------------------------------- 0.70s 2025-03-22 22:56:16.172168 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s 2025-03-22 22:56:16.172433 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s 2025-03-22 22:56:16.172857 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.63s 2025-03-22 22:56:16.173213 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.62s 2025-03-22 22:56:16.173487 | orchestrator | Add known links to the list of available block devices ------------------ 0.59s 2025-03-22 22:56:28.456971 | orchestrator | 2025-03-22 22:56:28 | INFO  | Task 765f94ef-3d62-4a1b-8ffd-fb7a18e7c621 is running in background. Output coming soon. 2025-03-22 23:56:30.696566 | orchestrator | 2025-03-22 23:56:30 | INFO  | Task e1828ef8-dec7-4abe-a120-950cdc18c6f7 (ceph-create-lvm-devices) was prepared for execution. 
2025-03-22 23:56:33.938996 | orchestrator | 2025-03-22 23:56:30 | INFO  | It takes a moment until task e1828ef8-dec7-4abe-a120-950cdc18c6f7 (ceph-create-lvm-devices) has been started and output is visible here. 2025-03-22 23:56:33.939141 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-03-22 23:56:34.517940 | orchestrator | 2025-03-22 23:56:34.519288 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-03-22 23:56:34.524469 | orchestrator | 2025-03-22 23:56:34.762732 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-03-22 23:56:34.762825 | orchestrator | Saturday 22 March 2025 23:56:34 +0000 (0:00:00.487) 0:00:00.487 ******** 2025-03-22 23:56:34.762858 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-03-22 23:56:34.764029 | orchestrator | 2025-03-22 23:56:34.764311 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-03-22 23:56:34.765113 | orchestrator | Saturday 22 March 2025 23:56:34 +0000 (0:00:00.248) 0:00:00.736 ******** 2025-03-22 23:56:35.025003 | orchestrator | ok: [testbed-node-3] 2025-03-22 23:56:35.025156 | orchestrator | 2025-03-22 23:56:35.025185 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:56:35.025470 | orchestrator | Saturday 22 March 2025 23:56:35 +0000 (0:00:00.261) 0:00:00.997 ******** 2025-03-22 23:56:35.953362 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-03-22 23:56:35.956169 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-03-22 23:56:35.958086 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-03-22 23:56:35.958124 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-3 => (item=loop3) 2025-03-22 23:56:35.959319 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-03-22 23:56:35.960291 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-03-22 23:56:35.961417 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-03-22 23:56:35.962094 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-03-22 23:56:35.962655 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-03-22 23:56:35.964198 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-03-22 23:56:35.964937 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-03-22 23:56:35.965717 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-03-22 23:56:35.966505 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-03-22 23:56:35.966675 | orchestrator | 2025-03-22 23:56:35.967535 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:56:35.967634 | orchestrator | Saturday 22 March 2025 23:56:35 +0000 (0:00:00.928) 0:00:01.925 ******** 2025-03-22 23:56:36.173288 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:36.174469 | orchestrator | 2025-03-22 23:56:36.175507 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:56:36.177741 | orchestrator | Saturday 22 March 2025 23:56:36 +0000 (0:00:00.203) 0:00:02.129 ******** 2025-03-22 23:56:36.375507 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:36.375991 | orchestrator | 2025-03-22 23:56:36.376716 | orchestrator | TASK [Add known links to the list of 
available block devices] ****************** 2025-03-22 23:56:36.377665 | orchestrator | Saturday 22 March 2025 23:56:36 +0000 (0:00:00.219) 0:00:02.349 ******** 2025-03-22 23:56:36.583946 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:36.584101 | orchestrator | 2025-03-22 23:56:36.584698 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:56:36.802219 | orchestrator | Saturday 22 March 2025 23:56:36 +0000 (0:00:00.206) 0:00:02.555 ******** 2025-03-22 23:56:36.802330 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:36.802695 | orchestrator | 2025-03-22 23:56:36.802728 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:56:36.803299 | orchestrator | Saturday 22 March 2025 23:56:36 +0000 (0:00:00.216) 0:00:02.771 ******** 2025-03-22 23:56:37.024548 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:37.024718 | orchestrator | 2025-03-22 23:56:37.025494 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:56:37.026797 | orchestrator | Saturday 22 March 2025 23:56:37 +0000 (0:00:00.226) 0:00:02.997 ******** 2025-03-22 23:56:37.289135 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:37.290301 | orchestrator | 2025-03-22 23:56:37.291260 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:56:37.292470 | orchestrator | Saturday 22 March 2025 23:56:37 +0000 (0:00:00.264) 0:00:03.262 ******** 2025-03-22 23:56:37.491012 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:37.492771 | orchestrator | 2025-03-22 23:56:37.495092 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:56:37.495752 | orchestrator | Saturday 22 March 2025 23:56:37 +0000 (0:00:00.201) 0:00:03.464 ******** 2025-03-22 23:56:37.690246 | orchestrator | skipping: 
[testbed-node-3] 2025-03-22 23:56:37.691280 | orchestrator | 2025-03-22 23:56:37.692236 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:56:37.694796 | orchestrator | Saturday 22 March 2025 23:56:37 +0000 (0:00:00.199) 0:00:03.664 ******** 2025-03-22 23:56:38.621568 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d944c393-c469-4703-9a84-253eb786ae38) 2025-03-22 23:56:38.622847 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d944c393-c469-4703-9a84-253eb786ae38) 2025-03-22 23:56:38.624120 | orchestrator | 2025-03-22 23:56:38.624607 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:56:38.625803 | orchestrator | Saturday 22 March 2025 23:56:38 +0000 (0:00:00.929) 0:00:04.593 ******** 2025-03-22 23:56:39.134324 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_873c6414-afc7-40f1-8cf8-9106a041fae2) 2025-03-22 23:56:39.136745 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_873c6414-afc7-40f1-8cf8-9106a041fae2) 2025-03-22 23:56:39.137008 | orchestrator | 2025-03-22 23:56:39.137659 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:56:39.138180 | orchestrator | Saturday 22 March 2025 23:56:39 +0000 (0:00:00.510) 0:00:05.104 ******** 2025-03-22 23:56:39.620995 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_34749356-9908-4430-b6a3-abe4e540ecc5) 2025-03-22 23:56:39.625756 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_34749356-9908-4430-b6a3-abe4e540ecc5) 2025-03-22 23:56:39.626245 | orchestrator | 2025-03-22 23:56:39.626711 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:56:39.627630 | orchestrator | Saturday 22 March 2025 23:56:39 +0000 (0:00:00.488) 0:00:05.592 ******** 2025-03-22 
23:56:40.193133 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_708adc92-3837-440f-909c-446edf0d18e7) 2025-03-22 23:56:40.193276 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_708adc92-3837-440f-909c-446edf0d18e7) 2025-03-22 23:56:40.193964 | orchestrator | 2025-03-22 23:56:40.196476 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:56:40.196807 | orchestrator | Saturday 22 March 2025 23:56:40 +0000 (0:00:00.571) 0:00:06.164 ******** 2025-03-22 23:56:40.558443 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-03-22 23:56:40.560446 | orchestrator | 2025-03-22 23:56:40.560519 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:56:40.560606 | orchestrator | Saturday 22 March 2025 23:56:40 +0000 (0:00:00.366) 0:00:06.531 ******** 2025-03-22 23:56:41.065173 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-03-22 23:56:41.067065 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-03-22 23:56:41.068360 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-03-22 23:56:41.070683 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-03-22 23:56:41.071046 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-03-22 23:56:41.071999 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-03-22 23:56:41.072822 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-03-22 23:56:41.073107 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-03-22 
23:56:41.073759 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-03-22 23:56:41.073902 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-03-22 23:56:41.074370 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-03-22 23:56:41.074897 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-03-22 23:56:41.075042 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-03-22 23:56:41.075440 | orchestrator | 2025-03-22 23:56:41.075909 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:56:41.076166 | orchestrator | Saturday 22 March 2025 23:56:41 +0000 (0:00:00.507) 0:00:07.038 ******** 2025-03-22 23:56:41.272890 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:41.274122 | orchestrator | 2025-03-22 23:56:41.275030 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:56:41.276000 | orchestrator | Saturday 22 March 2025 23:56:41 +0000 (0:00:00.207) 0:00:07.246 ******** 2025-03-22 23:56:41.477179 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:41.477311 | orchestrator | 2025-03-22 23:56:41.477337 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:56:41.477684 | orchestrator | Saturday 22 March 2025 23:56:41 +0000 (0:00:00.204) 0:00:07.450 ******** 2025-03-22 23:56:41.692439 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:41.692635 | orchestrator | 2025-03-22 23:56:41.692948 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:56:41.695408 | orchestrator | Saturday 22 March 2025 23:56:41 +0000 (0:00:00.214) 0:00:07.665 ******** 
2025-03-22 23:56:41.942473 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:41.943021 | orchestrator | 2025-03-22 23:56:41.943049 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:56:41.943069 | orchestrator | Saturday 22 March 2025 23:56:41 +0000 (0:00:00.248) 0:00:07.914 ******** 2025-03-22 23:56:42.402248 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:42.403246 | orchestrator | 2025-03-22 23:56:42.403283 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:56:42.404327 | orchestrator | Saturday 22 March 2025 23:56:42 +0000 (0:00:00.458) 0:00:08.372 ******** 2025-03-22 23:56:42.616156 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:42.618114 | orchestrator | 2025-03-22 23:56:42.619208 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:56:42.619239 | orchestrator | Saturday 22 March 2025 23:56:42 +0000 (0:00:00.214) 0:00:08.587 ******** 2025-03-22 23:56:42.815489 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:42.818175 | orchestrator | 2025-03-22 23:56:42.818686 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:56:42.819571 | orchestrator | Saturday 22 March 2025 23:56:42 +0000 (0:00:00.200) 0:00:08.787 ******** 2025-03-22 23:56:43.046950 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:43.048266 | orchestrator | 2025-03-22 23:56:43.048629 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:56:43.049640 | orchestrator | Saturday 22 March 2025 23:56:43 +0000 (0:00:00.232) 0:00:09.020 ******** 2025-03-22 23:56:43.745407 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-03-22 23:56:43.746816 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-03-22 23:56:43.748395 | orchestrator | ok: 
[testbed-node-3] => (item=sda15) 2025-03-22 23:56:43.748462 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-03-22 23:56:43.749463 | orchestrator | 2025-03-22 23:56:43.749973 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:56:43.750899 | orchestrator | Saturday 22 March 2025 23:56:43 +0000 (0:00:00.697) 0:00:09.717 ******** 2025-03-22 23:56:43.977032 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:43.977213 | orchestrator | 2025-03-22 23:56:43.978304 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:56:43.979019 | orchestrator | Saturday 22 March 2025 23:56:43 +0000 (0:00:00.232) 0:00:09.950 ******** 2025-03-22 23:56:44.215549 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:44.216953 | orchestrator | 2025-03-22 23:56:44.218812 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:56:44.220158 | orchestrator | Saturday 22 March 2025 23:56:44 +0000 (0:00:00.236) 0:00:10.186 ******** 2025-03-22 23:56:44.425250 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:44.425793 | orchestrator | 2025-03-22 23:56:44.426817 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:56:44.427803 | orchestrator | Saturday 22 March 2025 23:56:44 +0000 (0:00:00.212) 0:00:10.399 ******** 2025-03-22 23:56:44.634281 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:44.634935 | orchestrator | 2025-03-22 23:56:44.635494 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-03-22 23:56:44.636722 | orchestrator | Saturday 22 March 2025 23:56:44 +0000 (0:00:00.207) 0:00:10.606 ******** 2025-03-22 23:56:44.778063 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:44.778765 | orchestrator | 2025-03-22 23:56:44.779639 | orchestrator | TASK [Create 
dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-03-22 23:56:44.780046 | orchestrator | Saturday 22 March 2025 23:56:44 +0000 (0:00:00.144) 0:00:10.751 ******** 2025-03-22 23:56:45.202225 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4729f66e-933a-5d14-9b0e-268b64ee2b75'}}) 2025-03-22 23:56:45.203157 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9a6484f2-0da7-5943-9f10-427ab04c9a45'}}) 2025-03-22 23:56:45.204530 | orchestrator | 2025-03-22 23:56:45.205376 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-03-22 23:56:45.205409 | orchestrator | Saturday 22 March 2025 23:56:45 +0000 (0:00:00.423) 0:00:11.174 ******** 2025-03-22 23:56:47.540622 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-4729f66e-933a-5d14-9b0e-268b64ee2b75', 'data_vg': 'ceph-4729f66e-933a-5d14-9b0e-268b64ee2b75'}) 2025-03-22 23:56:47.543012 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9a6484f2-0da7-5943-9f10-427ab04c9a45', 'data_vg': 'ceph-9a6484f2-0da7-5943-9f10-427ab04c9a45'}) 2025-03-22 23:56:47.543062 | orchestrator | 2025-03-22 23:56:47.543999 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-03-22 23:56:47.544050 | orchestrator | Saturday 22 March 2025 23:56:47 +0000 (0:00:02.337) 0:00:13.512 ******** 2025-03-22 23:56:47.706242 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4729f66e-933a-5d14-9b0e-268b64ee2b75', 'data_vg': 'ceph-4729f66e-933a-5d14-9b0e-268b64ee2b75'})  2025-03-22 23:56:47.708437 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a6484f2-0da7-5943-9f10-427ab04c9a45', 'data_vg': 'ceph-9a6484f2-0da7-5943-9f10-427ab04c9a45'})  2025-03-22 23:56:47.710936 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:47.712337 | orchestrator | 2025-03-22 23:56:47.712438 | 
orchestrator | TASK [Create block LVs] ******************************************************** 2025-03-22 23:56:47.712469 | orchestrator | Saturday 22 March 2025 23:56:47 +0000 (0:00:00.167) 0:00:13.680 ******** 2025-03-22 23:56:49.349500 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-4729f66e-933a-5d14-9b0e-268b64ee2b75', 'data_vg': 'ceph-4729f66e-933a-5d14-9b0e-268b64ee2b75'}) 2025-03-22 23:56:49.350279 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9a6484f2-0da7-5943-9f10-427ab04c9a45', 'data_vg': 'ceph-9a6484f2-0da7-5943-9f10-427ab04c9a45'}) 2025-03-22 23:56:49.350322 | orchestrator | 2025-03-22 23:56:49.351153 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-03-22 23:56:49.351247 | orchestrator | Saturday 22 March 2025 23:56:49 +0000 (0:00:01.641) 0:00:15.321 ******** 2025-03-22 23:56:49.521973 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4729f66e-933a-5d14-9b0e-268b64ee2b75', 'data_vg': 'ceph-4729f66e-933a-5d14-9b0e-268b64ee2b75'})  2025-03-22 23:56:49.525802 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a6484f2-0da7-5943-9f10-427ab04c9a45', 'data_vg': 'ceph-9a6484f2-0da7-5943-9f10-427ab04c9a45'})  2025-03-22 23:56:49.526436 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:49.526493 | orchestrator | 2025-03-22 23:56:49.526513 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-03-22 23:56:49.526837 | orchestrator | Saturday 22 March 2025 23:56:49 +0000 (0:00:00.173) 0:00:15.495 ******** 2025-03-22 23:56:49.685617 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:49.686929 | orchestrator | 2025-03-22 23:56:49.687102 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-03-22 23:56:49.689923 | orchestrator | Saturday 22 March 2025 23:56:49 +0000 (0:00:00.163) 0:00:15.658 ******** 
2025-03-22 23:56:49.862764 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4729f66e-933a-5d14-9b0e-268b64ee2b75', 'data_vg': 'ceph-4729f66e-933a-5d14-9b0e-268b64ee2b75'})  2025-03-22 23:56:49.864214 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a6484f2-0da7-5943-9f10-427ab04c9a45', 'data_vg': 'ceph-9a6484f2-0da7-5943-9f10-427ab04c9a45'})  2025-03-22 23:56:49.866703 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:49.866793 | orchestrator | 2025-03-22 23:56:49.866816 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-03-22 23:56:49.867411 | orchestrator | Saturday 22 March 2025 23:56:49 +0000 (0:00:00.177) 0:00:15.835 ******** 2025-03-22 23:56:50.010124 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:50.010316 | orchestrator | 2025-03-22 23:56:50.011026 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-03-22 23:56:50.011784 | orchestrator | Saturday 22 March 2025 23:56:50 +0000 (0:00:00.147) 0:00:15.983 ******** 2025-03-22 23:56:50.188487 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4729f66e-933a-5d14-9b0e-268b64ee2b75', 'data_vg': 'ceph-4729f66e-933a-5d14-9b0e-268b64ee2b75'})  2025-03-22 23:56:50.189574 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a6484f2-0da7-5943-9f10-427ab04c9a45', 'data_vg': 'ceph-9a6484f2-0da7-5943-9f10-427ab04c9a45'})  2025-03-22 23:56:50.190096 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:50.190990 | orchestrator | 2025-03-22 23:56:50.191519 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-03-22 23:56:50.192190 | orchestrator | Saturday 22 March 2025 23:56:50 +0000 (0:00:00.179) 0:00:16.163 ******** 2025-03-22 23:56:50.506540 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:50.507334 | orchestrator | 2025-03-22 
23:56:50.508462 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-03-22 23:56:50.509335 | orchestrator | Saturday 22 March 2025 23:56:50 +0000 (0:00:00.317) 0:00:16.480 ******** 2025-03-22 23:56:50.728928 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4729f66e-933a-5d14-9b0e-268b64ee2b75', 'data_vg': 'ceph-4729f66e-933a-5d14-9b0e-268b64ee2b75'})  2025-03-22 23:56:50.729411 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a6484f2-0da7-5943-9f10-427ab04c9a45', 'data_vg': 'ceph-9a6484f2-0da7-5943-9f10-427ab04c9a45'})  2025-03-22 23:56:50.730495 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:50.731665 | orchestrator | 2025-03-22 23:56:50.732345 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-03-22 23:56:50.732778 | orchestrator | Saturday 22 March 2025 23:56:50 +0000 (0:00:00.219) 0:00:16.700 ******** 2025-03-22 23:56:50.881550 | orchestrator | ok: [testbed-node-3] 2025-03-22 23:56:50.882457 | orchestrator | 2025-03-22 23:56:50.882722 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-03-22 23:56:50.882751 | orchestrator | Saturday 22 March 2025 23:56:50 +0000 (0:00:00.155) 0:00:16.855 ******** 2025-03-22 23:56:51.065452 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4729f66e-933a-5d14-9b0e-268b64ee2b75', 'data_vg': 'ceph-4729f66e-933a-5d14-9b0e-268b64ee2b75'})  2025-03-22 23:56:51.065826 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a6484f2-0da7-5943-9f10-427ab04c9a45', 'data_vg': 'ceph-9a6484f2-0da7-5943-9f10-427ab04c9a45'})  2025-03-22 23:56:51.069707 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:51.070313 | orchestrator | 2025-03-22 23:56:51.073448 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-03-22 23:56:51.073648 | 
orchestrator | Saturday 22 March 2025 23:56:51 +0000 (0:00:00.178) 0:00:17.034 ******** 2025-03-22 23:56:51.237638 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4729f66e-933a-5d14-9b0e-268b64ee2b75', 'data_vg': 'ceph-4729f66e-933a-5d14-9b0e-268b64ee2b75'})  2025-03-22 23:56:51.237870 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a6484f2-0da7-5943-9f10-427ab04c9a45', 'data_vg': 'ceph-9a6484f2-0da7-5943-9f10-427ab04c9a45'})  2025-03-22 23:56:51.238901 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:51.239229 | orchestrator | 2025-03-22 23:56:51.241865 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-03-22 23:56:51.242762 | orchestrator | Saturday 22 March 2025 23:56:51 +0000 (0:00:00.176) 0:00:17.210 ******** 2025-03-22 23:56:51.418627 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4729f66e-933a-5d14-9b0e-268b64ee2b75', 'data_vg': 'ceph-4729f66e-933a-5d14-9b0e-268b64ee2b75'})  2025-03-22 23:56:51.419833 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a6484f2-0da7-5943-9f10-427ab04c9a45', 'data_vg': 'ceph-9a6484f2-0da7-5943-9f10-427ab04c9a45'})  2025-03-22 23:56:51.421234 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:51.423078 | orchestrator | 2025-03-22 23:56:51.424069 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-03-22 23:56:51.424745 | orchestrator | Saturday 22 March 2025 23:56:51 +0000 (0:00:00.181) 0:00:17.392 ******** 2025-03-22 23:56:51.559544 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:51.560087 | orchestrator | 2025-03-22 23:56:51.560495 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-03-22 23:56:51.563271 | orchestrator | Saturday 22 March 2025 23:56:51 +0000 (0:00:00.140) 0:00:17.532 ******** 2025-03-22 23:56:51.712235 | orchestrator | 
skipping: [testbed-node-3] 2025-03-22 23:56:51.713336 | orchestrator | 2025-03-22 23:56:51.713384 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-03-22 23:56:51.715327 | orchestrator | Saturday 22 March 2025 23:56:51 +0000 (0:00:00.149) 0:00:17.682 ******** 2025-03-22 23:56:51.850351 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:51.850961 | orchestrator | 2025-03-22 23:56:51.851716 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-03-22 23:56:51.852691 | orchestrator | Saturday 22 March 2025 23:56:51 +0000 (0:00:00.141) 0:00:17.824 ******** 2025-03-22 23:56:51.996113 | orchestrator | ok: [testbed-node-3] => { 2025-03-22 23:56:51.997681 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-03-22 23:56:51.999926 | orchestrator | } 2025-03-22 23:56:52.001102 | orchestrator | 2025-03-22 23:56:52.001713 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-03-22 23:56:52.002119 | orchestrator | Saturday 22 March 2025 23:56:51 +0000 (0:00:00.144) 0:00:17.968 ******** 2025-03-22 23:56:52.138998 | orchestrator | ok: [testbed-node-3] => { 2025-03-22 23:56:52.139938 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-03-22 23:56:52.140761 | orchestrator | } 2025-03-22 23:56:52.141237 | orchestrator | 2025-03-22 23:56:52.142100 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-03-22 23:56:52.143995 | orchestrator | Saturday 22 March 2025 23:56:52 +0000 (0:00:00.142) 0:00:18.111 ******** 2025-03-22 23:56:52.297799 | orchestrator | ok: [testbed-node-3] => { 2025-03-22 23:56:52.298786 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-03-22 23:56:52.299267 | orchestrator | } 2025-03-22 23:56:52.300987 | orchestrator | 2025-03-22 23:56:52.301848 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 
2025-03-22 23:56:52.302762 | orchestrator | Saturday 22 March 2025 23:56:52 +0000 (0:00:00.159) 0:00:18.271 ******** 2025-03-22 23:56:53.249416 | orchestrator | ok: [testbed-node-3] 2025-03-22 23:56:53.249968 | orchestrator | 2025-03-22 23:56:53.252237 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-03-22 23:56:53.801889 | orchestrator | Saturday 22 March 2025 23:56:53 +0000 (0:00:00.949) 0:00:19.221 ******** 2025-03-22 23:56:53.801991 | orchestrator | ok: [testbed-node-3] 2025-03-22 23:56:53.802469 | orchestrator | 2025-03-22 23:56:53.804130 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-03-22 23:56:53.805170 | orchestrator | Saturday 22 March 2025 23:56:53 +0000 (0:00:00.551) 0:00:19.773 ******** 2025-03-22 23:56:54.375568 | orchestrator | ok: [testbed-node-3] 2025-03-22 23:56:54.377056 | orchestrator | 2025-03-22 23:56:54.377092 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-03-22 23:56:54.378410 | orchestrator | Saturday 22 March 2025 23:56:54 +0000 (0:00:00.575) 0:00:20.349 ******** 2025-03-22 23:56:54.533239 | orchestrator | ok: [testbed-node-3] 2025-03-22 23:56:54.534227 | orchestrator | 2025-03-22 23:56:54.534903 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-03-22 23:56:54.536373 | orchestrator | Saturday 22 March 2025 23:56:54 +0000 (0:00:00.158) 0:00:20.507 ******** 2025-03-22 23:56:54.663887 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:54.664415 | orchestrator | 2025-03-22 23:56:54.665316 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-03-22 23:56:54.667475 | orchestrator | Saturday 22 March 2025 23:56:54 +0000 (0:00:00.129) 0:00:20.636 ******** 2025-03-22 23:56:54.774963 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:54.775577 | orchestrator | 
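The three "Gather … VGs with total and available size in bytes" tasks above, followed by "Combine JSON from _db/wal/db_wal_vgs_cmd_output", collect LVM volume-group sizes for the capacity checks and produce the `vgs_report` structure printed in the next task. A hedged sketch of one gather task plus the combine step — the exact `vgs` options are assumptions, though the register name follows the `_db/wal/db_wal_vgs_cmd_output` naming visible in the combine task, and the `report[0]` shape matches the `{"vg": []}` report printed below:

```yaml
# Hedged sketch: query LVM's JSON report for VG totals in bytes.
- name: Gather DB VGs with total and available size in bytes
  ansible.builtin.command: >
    vgs --reportformat json --units B --nosuffix
    --options vg_name,vg_size,vg_free
  register: _db_vgs_cmd_output
  changed_when: false

# vgs emits {"report": [{"vg": [...]}]}; the first report entry is
# what appears in the log as vgs_report (empty "vg" list here, since
# no DB/WAL devices are configured on this node).
- name: Combine JSON from _db/wal/db_wal_vgs_cmd_output
  ansible.builtin.set_fact:
    vgs_report: "{{ (_db_vgs_cmd_output.stdout | from_json).report.0 }}"
```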
2025-03-22 23:56:54.778071 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-03-22 23:56:54.926148 | orchestrator | Saturday 22 March 2025 23:56:54 +0000 (0:00:00.109) 0:00:20.746 ******** 2025-03-22 23:56:54.926198 | orchestrator | ok: [testbed-node-3] => { 2025-03-22 23:56:54.927494 | orchestrator |  "vgs_report": { 2025-03-22 23:56:54.929224 | orchestrator |  "vg": [] 2025-03-22 23:56:54.929991 | orchestrator |  } 2025-03-22 23:56:54.931029 | orchestrator | } 2025-03-22 23:56:54.931706 | orchestrator | 2025-03-22 23:56:54.932778 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-03-22 23:56:54.933364 | orchestrator | Saturday 22 March 2025 23:56:54 +0000 (0:00:00.151) 0:00:20.898 ******** 2025-03-22 23:56:55.065992 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:55.066505 | orchestrator | 2025-03-22 23:56:55.067199 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-03-22 23:56:55.068696 | orchestrator | Saturday 22 March 2025 23:56:55 +0000 (0:00:00.139) 0:00:21.037 ******** 2025-03-22 23:56:55.207727 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:55.209228 | orchestrator | 2025-03-22 23:56:55.209686 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-03-22 23:56:55.209713 | orchestrator | Saturday 22 March 2025 23:56:55 +0000 (0:00:00.141) 0:00:21.178 ******** 2025-03-22 23:56:55.357246 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:55.360845 | orchestrator | 2025-03-22 23:56:55.361548 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-03-22 23:56:55.361611 | orchestrator | Saturday 22 March 2025 23:56:55 +0000 (0:00:00.151) 0:00:21.329 ******** 2025-03-22 23:56:55.501565 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:55.502360 | orchestrator | 
2025-03-22 23:56:55.503224 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-03-22 23:56:55.504187 | orchestrator | Saturday 22 March 2025 23:56:55 +0000 (0:00:00.144) 0:00:21.474 ******** 2025-03-22 23:56:55.884841 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:55.885661 | orchestrator | 2025-03-22 23:56:55.886377 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-03-22 23:56:55.887387 | orchestrator | Saturday 22 March 2025 23:56:55 +0000 (0:00:00.379) 0:00:21.854 ******** 2025-03-22 23:56:56.043044 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:56.045369 | orchestrator | 2025-03-22 23:56:56.045993 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-03-22 23:56:56.046056 | orchestrator | Saturday 22 March 2025 23:56:56 +0000 (0:00:00.161) 0:00:22.016 ******** 2025-03-22 23:56:56.203431 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:56.204838 | orchestrator | 2025-03-22 23:56:56.205663 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-03-22 23:56:56.207139 | orchestrator | Saturday 22 March 2025 23:56:56 +0000 (0:00:00.159) 0:00:22.175 ******** 2025-03-22 23:56:56.357703 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:56.359315 | orchestrator | 2025-03-22 23:56:56.360242 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-03-22 23:56:56.363181 | orchestrator | Saturday 22 March 2025 23:56:56 +0000 (0:00:00.155) 0:00:22.331 ******** 2025-03-22 23:56:56.506851 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:56.507308 | orchestrator | 2025-03-22 23:56:56.507942 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-03-22 23:56:56.509202 | orchestrator | Saturday 22 March 2025 23:56:56 +0000 
(0:00:00.148) 0:00:22.480 ******** 2025-03-22 23:56:56.674248 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:56.674697 | orchestrator | 2025-03-22 23:56:56.675331 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-03-22 23:56:56.677461 | orchestrator | Saturday 22 March 2025 23:56:56 +0000 (0:00:00.167) 0:00:22.647 ******** 2025-03-22 23:56:56.827442 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:56.829486 | orchestrator | 2025-03-22 23:56:56.976859 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-03-22 23:56:56.976941 | orchestrator | Saturday 22 March 2025 23:56:56 +0000 (0:00:00.151) 0:00:22.799 ******** 2025-03-22 23:56:56.976967 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:56.977047 | orchestrator | 2025-03-22 23:56:56.978352 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-03-22 23:56:56.979211 | orchestrator | Saturday 22 March 2025 23:56:56 +0000 (0:00:00.151) 0:00:22.950 ******** 2025-03-22 23:56:57.123692 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:57.123823 | orchestrator | 2025-03-22 23:56:57.124781 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-03-22 23:56:57.125627 | orchestrator | Saturday 22 March 2025 23:56:57 +0000 (0:00:00.147) 0:00:23.098 ******** 2025-03-22 23:56:57.283854 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:57.284348 | orchestrator | 2025-03-22 23:56:57.285162 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-03-22 23:56:57.286103 | orchestrator | Saturday 22 March 2025 23:56:57 +0000 (0:00:00.156) 0:00:23.255 ******** 2025-03-22 23:56:57.454478 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4729f66e-933a-5d14-9b0e-268b64ee2b75', 'data_vg': 
'ceph-4729f66e-933a-5d14-9b0e-268b64ee2b75'})  2025-03-22 23:56:57.455774 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a6484f2-0da7-5943-9f10-427ab04c9a45', 'data_vg': 'ceph-9a6484f2-0da7-5943-9f10-427ab04c9a45'})  2025-03-22 23:56:57.456211 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:57.457518 | orchestrator | 2025-03-22 23:56:57.459341 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-03-22 23:56:57.630256 | orchestrator | Saturday 22 March 2025 23:56:57 +0000 (0:00:00.172) 0:00:23.428 ******** 2025-03-22 23:56:57.630338 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4729f66e-933a-5d14-9b0e-268b64ee2b75', 'data_vg': 'ceph-4729f66e-933a-5d14-9b0e-268b64ee2b75'})  2025-03-22 23:56:57.631150 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a6484f2-0da7-5943-9f10-427ab04c9a45', 'data_vg': 'ceph-9a6484f2-0da7-5943-9f10-427ab04c9a45'})  2025-03-22 23:56:57.632210 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:57.632930 | orchestrator | 2025-03-22 23:56:57.633694 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-03-22 23:56:57.633915 | orchestrator | Saturday 22 March 2025 23:56:57 +0000 (0:00:00.175) 0:00:23.603 ******** 2025-03-22 23:56:58.047404 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4729f66e-933a-5d14-9b0e-268b64ee2b75', 'data_vg': 'ceph-4729f66e-933a-5d14-9b0e-268b64ee2b75'})  2025-03-22 23:56:58.047553 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a6484f2-0da7-5943-9f10-427ab04c9a45', 'data_vg': 'ceph-9a6484f2-0da7-5943-9f10-427ab04c9a45'})  2025-03-22 23:56:58.048360 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:58.050323 | orchestrator | 2025-03-22 23:56:58.050607 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-03-22 
23:56:58.050634 | orchestrator | Saturday 22 March 2025 23:56:58 +0000 (0:00:00.415) 0:00:24.018 ******** 2025-03-22 23:56:58.220302 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4729f66e-933a-5d14-9b0e-268b64ee2b75', 'data_vg': 'ceph-4729f66e-933a-5d14-9b0e-268b64ee2b75'})  2025-03-22 23:56:58.220834 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a6484f2-0da7-5943-9f10-427ab04c9a45', 'data_vg': 'ceph-9a6484f2-0da7-5943-9f10-427ab04c9a45'})  2025-03-22 23:56:58.221719 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:58.222191 | orchestrator | 2025-03-22 23:56:58.224815 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-03-22 23:56:58.225761 | orchestrator | Saturday 22 March 2025 23:56:58 +0000 (0:00:00.174) 0:00:24.193 ******** 2025-03-22 23:56:58.416333 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4729f66e-933a-5d14-9b0e-268b64ee2b75', 'data_vg': 'ceph-4729f66e-933a-5d14-9b0e-268b64ee2b75'})  2025-03-22 23:56:58.418687 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a6484f2-0da7-5943-9f10-427ab04c9a45', 'data_vg': 'ceph-9a6484f2-0da7-5943-9f10-427ab04c9a45'})  2025-03-22 23:56:58.418784 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:58.421550 | orchestrator | 2025-03-22 23:56:58.422131 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-03-22 23:56:58.422162 | orchestrator | Saturday 22 March 2025 23:56:58 +0000 (0:00:00.196) 0:00:24.390 ******** 2025-03-22 23:56:58.591400 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4729f66e-933a-5d14-9b0e-268b64ee2b75', 'data_vg': 'ceph-4729f66e-933a-5d14-9b0e-268b64ee2b75'})  2025-03-22 23:56:58.594628 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a6484f2-0da7-5943-9f10-427ab04c9a45', 'data_vg': 
'ceph-9a6484f2-0da7-5943-9f10-427ab04c9a45'})  2025-03-22 23:56:58.595273 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:58.595686 | orchestrator | 2025-03-22 23:56:58.596177 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-03-22 23:56:58.596204 | orchestrator | Saturday 22 March 2025 23:56:58 +0000 (0:00:00.170) 0:00:24.560 ******** 2025-03-22 23:56:58.772648 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4729f66e-933a-5d14-9b0e-268b64ee2b75', 'data_vg': 'ceph-4729f66e-933a-5d14-9b0e-268b64ee2b75'})  2025-03-22 23:56:58.774146 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a6484f2-0da7-5943-9f10-427ab04c9a45', 'data_vg': 'ceph-9a6484f2-0da7-5943-9f10-427ab04c9a45'})  2025-03-22 23:56:58.774676 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:58.776041 | orchestrator | 2025-03-22 23:56:58.777179 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-03-22 23:56:58.778204 | orchestrator | Saturday 22 March 2025 23:56:58 +0000 (0:00:00.182) 0:00:24.743 ******** 2025-03-22 23:56:58.935676 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4729f66e-933a-5d14-9b0e-268b64ee2b75', 'data_vg': 'ceph-4729f66e-933a-5d14-9b0e-268b64ee2b75'})  2025-03-22 23:56:58.937149 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a6484f2-0da7-5943-9f10-427ab04c9a45', 'data_vg': 'ceph-9a6484f2-0da7-5943-9f10-427ab04c9a45'})  2025-03-22 23:56:58.939627 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:56:58.940676 | orchestrator | 2025-03-22 23:56:58.940708 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-03-22 23:56:58.941724 | orchestrator | Saturday 22 March 2025 23:56:58 +0000 (0:00:00.163) 0:00:24.907 ******** 2025-03-22 23:56:59.481510 | orchestrator | ok: [testbed-node-3] 2025-03-22 23:56:59.481672 | 
orchestrator | 2025-03-22 23:56:59.482735 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-03-22 23:57:00.023333 | orchestrator | Saturday 22 March 2025 23:56:59 +0000 (0:00:00.548) 0:00:25.455 ******** 2025-03-22 23:57:00.023450 | orchestrator | ok: [testbed-node-3] 2025-03-22 23:57:00.025732 | orchestrator | 2025-03-22 23:57:00.027379 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-03-22 23:57:00.188328 | orchestrator | Saturday 22 March 2025 23:57:00 +0000 (0:00:00.539) 0:00:25.995 ******** 2025-03-22 23:57:00.188379 | orchestrator | ok: [testbed-node-3] 2025-03-22 23:57:00.189624 | orchestrator | 2025-03-22 23:57:00.190300 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-03-22 23:57:00.190954 | orchestrator | Saturday 22 March 2025 23:57:00 +0000 (0:00:00.166) 0:00:26.161 ******** 2025-03-22 23:57:00.373077 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-4729f66e-933a-5d14-9b0e-268b64ee2b75', 'vg_name': 'ceph-4729f66e-933a-5d14-9b0e-268b64ee2b75'}) 2025-03-22 23:57:00.373929 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-9a6484f2-0da7-5943-9f10-427ab04c9a45', 'vg_name': 'ceph-9a6484f2-0da7-5943-9f10-427ab04c9a45'}) 2025-03-22 23:57:00.374926 | orchestrator | 2025-03-22 23:57:00.378009 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-03-22 23:57:00.378731 | orchestrator | Saturday 22 March 2025 23:57:00 +0000 (0:00:00.184) 0:00:26.346 ******** 2025-03-22 23:57:00.799023 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4729f66e-933a-5d14-9b0e-268b64ee2b75', 'data_vg': 'ceph-4729f66e-933a-5d14-9b0e-268b64ee2b75'})  2025-03-22 23:57:00.799701 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a6484f2-0da7-5943-9f10-427ab04c9a45', 'data_vg': 
'ceph-9a6484f2-0da7-5943-9f10-427ab04c9a45'})  2025-03-22 23:57:00.800337 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:57:00.803298 | orchestrator | 2025-03-22 23:57:00.803511 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-03-22 23:57:00.803537 | orchestrator | Saturday 22 March 2025 23:57:00 +0000 (0:00:00.426) 0:00:26.772 ******** 2025-03-22 23:57:00.981335 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4729f66e-933a-5d14-9b0e-268b64ee2b75', 'data_vg': 'ceph-4729f66e-933a-5d14-9b0e-268b64ee2b75'})  2025-03-22 23:57:00.982009 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a6484f2-0da7-5943-9f10-427ab04c9a45', 'data_vg': 'ceph-9a6484f2-0da7-5943-9f10-427ab04c9a45'})  2025-03-22 23:57:00.982088 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:57:00.982960 | orchestrator | 2025-03-22 23:57:00.983417 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-03-22 23:57:00.984254 | orchestrator | Saturday 22 March 2025 23:57:00 +0000 (0:00:00.179) 0:00:26.952 ******** 2025-03-22 23:57:01.161478 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4729f66e-933a-5d14-9b0e-268b64ee2b75', 'data_vg': 'ceph-4729f66e-933a-5d14-9b0e-268b64ee2b75'})  2025-03-22 23:57:01.163333 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a6484f2-0da7-5943-9f10-427ab04c9a45', 'data_vg': 'ceph-9a6484f2-0da7-5943-9f10-427ab04c9a45'})  2025-03-22 23:57:01.163913 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:57:01.164554 | orchestrator | 2025-03-22 23:57:01.165353 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-03-22 23:57:01.166283 | orchestrator | Saturday 22 March 2025 23:57:01 +0000 (0:00:00.182) 0:00:27.135 ******** 2025-03-22 23:57:01.801086 | orchestrator | ok: [testbed-node-3] => { 2025-03-22 
23:57:01.801281 | orchestrator |  "lvm_report": { 2025-03-22 23:57:01.801821 | orchestrator |  "lv": [ 2025-03-22 23:57:01.804293 | orchestrator |  { 2025-03-22 23:57:01.805165 | orchestrator |  "lv_name": "osd-block-4729f66e-933a-5d14-9b0e-268b64ee2b75", 2025-03-22 23:57:01.806483 | orchestrator |  "vg_name": "ceph-4729f66e-933a-5d14-9b0e-268b64ee2b75" 2025-03-22 23:57:01.806967 | orchestrator |  }, 2025-03-22 23:57:01.808046 | orchestrator |  { 2025-03-22 23:57:01.808687 | orchestrator |  "lv_name": "osd-block-9a6484f2-0da7-5943-9f10-427ab04c9a45", 2025-03-22 23:57:01.809323 | orchestrator |  "vg_name": "ceph-9a6484f2-0da7-5943-9f10-427ab04c9a45" 2025-03-22 23:57:01.809910 | orchestrator |  } 2025-03-22 23:57:01.810418 | orchestrator |  ], 2025-03-22 23:57:01.811225 | orchestrator |  "pv": [ 2025-03-22 23:57:01.811627 | orchestrator |  { 2025-03-22 23:57:01.812300 | orchestrator |  "pv_name": "/dev/sdb", 2025-03-22 23:57:01.812673 | orchestrator |  "vg_name": "ceph-4729f66e-933a-5d14-9b0e-268b64ee2b75" 2025-03-22 23:57:01.813037 | orchestrator |  }, 2025-03-22 23:57:01.813525 | orchestrator |  { 2025-03-22 23:57:01.814062 | orchestrator |  "pv_name": "/dev/sdc", 2025-03-22 23:57:01.814671 | orchestrator |  "vg_name": "ceph-9a6484f2-0da7-5943-9f10-427ab04c9a45" 2025-03-22 23:57:01.815625 | orchestrator |  } 2025-03-22 23:57:01.816074 | orchestrator |  ] 2025-03-22 23:57:01.816739 | orchestrator |  } 2025-03-22 23:57:01.817445 | orchestrator | } 2025-03-22 23:57:01.817864 | orchestrator | 2025-03-22 23:57:01.818426 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-03-22 23:57:01.818976 | orchestrator | 2025-03-22 23:57:01.819525 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-03-22 23:57:01.819962 | orchestrator | Saturday 22 March 2025 23:57:01 +0000 (0:00:00.639) 0:00:27.774 ******** 2025-03-22 23:57:02.273088 | orchestrator | ok: [testbed-node-4 -> 
testbed-manager(192.168.16.5)] 2025-03-22 23:57:02.274314 | orchestrator | 2025-03-22 23:57:02.274849 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-03-22 23:57:02.275713 | orchestrator | Saturday 22 March 2025 23:57:02 +0000 (0:00:00.470) 0:00:28.245 ******** 2025-03-22 23:57:02.492417 | orchestrator | ok: [testbed-node-4] 2025-03-22 23:57:02.493196 | orchestrator | 2025-03-22 23:57:02.494239 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:57:02.495118 | orchestrator | Saturday 22 March 2025 23:57:02 +0000 (0:00:00.221) 0:00:28.466 ******** 2025-03-22 23:57:02.958830 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-03-22 23:57:02.960160 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-03-22 23:57:02.960432 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-03-22 23:57:02.961575 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-03-22 23:57:02.962067 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-03-22 23:57:02.962254 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-03-22 23:57:02.963305 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-03-22 23:57:02.964222 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-03-22 23:57:02.965116 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-03-22 23:57:02.965783 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-03-22 23:57:02.966737 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-03-22 23:57:02.967377 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-03-22 23:57:02.967839 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-03-22 23:57:02.968391 | orchestrator | 2025-03-22 23:57:02.968884 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:57:02.969298 | orchestrator | Saturday 22 March 2025 23:57:02 +0000 (0:00:00.465) 0:00:28.931 ******** 2025-03-22 23:57:03.155290 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:03.155665 | orchestrator | 2025-03-22 23:57:03.156448 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:57:03.157689 | orchestrator | Saturday 22 March 2025 23:57:03 +0000 (0:00:00.196) 0:00:29.128 ******** 2025-03-22 23:57:03.338716 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:03.338891 | orchestrator | 2025-03-22 23:57:03.339779 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:57:03.341091 | orchestrator | Saturday 22 March 2025 23:57:03 +0000 (0:00:00.184) 0:00:29.313 ******** 2025-03-22 23:57:03.518157 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:03.519003 | orchestrator | 2025-03-22 23:57:03.520193 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:57:03.521088 | orchestrator | Saturday 22 March 2025 23:57:03 +0000 (0:00:00.179) 0:00:29.492 ******** 2025-03-22 23:57:03.722439 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:03.723930 | orchestrator | 2025-03-22 23:57:03.725171 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:57:03.725950 | orchestrator | Saturday 22 March 2025 23:57:03 +0000 
(0:00:00.202) 0:00:29.695 ******** 2025-03-22 23:57:03.916924 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:03.917163 | orchestrator | 2025-03-22 23:57:03.917185 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:57:03.919001 | orchestrator | Saturday 22 March 2025 23:57:03 +0000 (0:00:00.193) 0:00:29.889 ******** 2025-03-22 23:57:04.128324 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:04.128740 | orchestrator | 2025-03-22 23:57:04.130247 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:57:04.132387 | orchestrator | Saturday 22 March 2025 23:57:04 +0000 (0:00:00.211) 0:00:30.100 ******** 2025-03-22 23:57:04.354206 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:04.355120 | orchestrator | 2025-03-22 23:57:04.356831 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:57:04.357822 | orchestrator | Saturday 22 March 2025 23:57:04 +0000 (0:00:00.225) 0:00:30.326 ******** 2025-03-22 23:57:05.025198 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:05.026094 | orchestrator | 2025-03-22 23:57:05.026335 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:57:05.026367 | orchestrator | Saturday 22 March 2025 23:57:05 +0000 (0:00:00.671) 0:00:30.997 ******** 2025-03-22 23:57:05.464985 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8830d5a0-b84d-4cff-a107-ff4c6c105a90) 2025-03-22 23:57:05.465477 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8830d5a0-b84d-4cff-a107-ff4c6c105a90) 2025-03-22 23:57:05.466413 | orchestrator | 2025-03-22 23:57:05.468502 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:57:05.469181 | orchestrator | Saturday 22 March 2025 23:57:05 +0000 
(0:00:00.439) 0:00:31.437 ******** 2025-03-22 23:57:05.957889 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b423c274-b2a0-4f0a-b616-ca1c2b60d0cd) 2025-03-22 23:57:05.958290 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b423c274-b2a0-4f0a-b616-ca1c2b60d0cd) 2025-03-22 23:57:05.958760 | orchestrator | 2025-03-22 23:57:05.959525 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:57:05.962770 | orchestrator | Saturday 22 March 2025 23:57:05 +0000 (0:00:00.492) 0:00:31.929 ******** 2025-03-22 23:57:06.449062 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_57690c98-8cea-4402-9842-e7701133b4c4) 2025-03-22 23:57:06.450846 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_57690c98-8cea-4402-9842-e7701133b4c4) 2025-03-22 23:57:06.452135 | orchestrator | 2025-03-22 23:57:06.453801 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:57:06.455032 | orchestrator | Saturday 22 March 2025 23:57:06 +0000 (0:00:00.491) 0:00:32.420 ******** 2025-03-22 23:57:06.972059 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_036a8c60-8400-4952-a958-bb8a1eba60c8) 2025-03-22 23:57:06.972234 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_036a8c60-8400-4952-a958-bb8a1eba60c8) 2025-03-22 23:57:06.972716 | orchestrator | 2025-03-22 23:57:06.973083 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:57:06.973479 | orchestrator | Saturday 22 March 2025 23:57:06 +0000 (0:00:00.524) 0:00:32.946 ******** 2025-03-22 23:57:07.332947 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-03-22 23:57:07.333646 | orchestrator | 2025-03-22 23:57:07.335943 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 
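The "Add known links" tasks above extend the list of available block devices with the stable `/dev/disk/by-id` aliases (`scsi-0QEMU_…`, `scsi-SQEMU_…`, `ata-QEMU_DVD-ROM_QM00001`) that point at each kernel device. A minimal sketch of that grouping, not the playbook's actual code, with a sample mapping taken from the link names visible in this log:

```python
# Illustrative sketch: collect the /dev/disk/by-id link names that resolve
# to a given kernel device. The by_id mapping is sample data mirroring
# the aliases reported for testbed-node-4 above; the real tasks read the
# links from ansible_facts/udev, not from a hard-coded dict.
def links_for_device(by_id: dict[str, str], device: str) -> list[str]:
    """Return all by-id link names whose target is `device`, sorted."""
    return sorted(name for name, target in by_id.items() if target == device)

by_id = {
    "scsi-0QEMU_QEMU_HARDDISK_8830d5a0-b84d-4cff-a107-ff4c6c105a90": "sda",
    "scsi-SQEMU_QEMU_HARDDISK_8830d5a0-b84d-4cff-a107-ff4c6c105a90": "sda",
    "ata-QEMU_DVD-ROM_QM00001": "sr0",
}
print(links_for_device(by_id, "sda"))
```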
23:57:07.335980 | orchestrator | Saturday 22 March 2025 23:57:07 +0000 (0:00:00.358) 0:00:33.304 ******** 2025-03-22 23:57:07.853179 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-03-22 23:57:07.855218 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-03-22 23:57:07.859684 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-03-22 23:57:07.860907 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-03-22 23:57:07.862162 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-03-22 23:57:07.865343 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-03-22 23:57:07.866409 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-03-22 23:57:07.869071 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-03-22 23:57:07.872052 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-03-22 23:57:07.872347 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-03-22 23:57:07.873435 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-03-22 23:57:07.874143 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-03-22 23:57:07.874862 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-03-22 23:57:07.875532 | orchestrator | 2025-03-22 23:57:07.876064 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:57:07.876535 | 
orchestrator | Saturday 22 March 2025 23:57:07 +0000 (0:00:00.521) 0:00:33.826 ******** 2025-03-22 23:57:08.087888 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:08.088078 | orchestrator | 2025-03-22 23:57:08.088342 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:57:08.088785 | orchestrator | Saturday 22 March 2025 23:57:08 +0000 (0:00:00.234) 0:00:34.061 ******** 2025-03-22 23:57:08.288692 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:08.288835 | orchestrator | 2025-03-22 23:57:08.288988 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:57:08.289314 | orchestrator | Saturday 22 March 2025 23:57:08 +0000 (0:00:00.201) 0:00:34.262 ******** 2025-03-22 23:57:08.955249 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:08.959241 | orchestrator | 2025-03-22 23:57:09.168404 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:57:09.168534 | orchestrator | Saturday 22 March 2025 23:57:08 +0000 (0:00:00.663) 0:00:34.926 ******** 2025-03-22 23:57:09.168565 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:09.168669 | orchestrator | 2025-03-22 23:57:09.170353 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:57:09.171521 | orchestrator | Saturday 22 March 2025 23:57:09 +0000 (0:00:00.212) 0:00:35.138 ******** 2025-03-22 23:57:09.395174 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:09.395710 | orchestrator | 2025-03-22 23:57:09.396517 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:57:09.398090 | orchestrator | Saturday 22 March 2025 23:57:09 +0000 (0:00:00.229) 0:00:35.368 ******** 2025-03-22 23:57:09.622983 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:09.624252 | orchestrator | 2025-03-22 
23:57:09.626117 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:57:09.626183 | orchestrator | Saturday 22 March 2025 23:57:09 +0000 (0:00:00.227) 0:00:35.596 ******** 2025-03-22 23:57:09.836074 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:09.836873 | orchestrator | 2025-03-22 23:57:09.837710 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:57:09.838431 | orchestrator | Saturday 22 March 2025 23:57:09 +0000 (0:00:00.213) 0:00:35.809 ******** 2025-03-22 23:57:10.067866 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:10.068427 | orchestrator | 2025-03-22 23:57:10.068465 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:57:10.068718 | orchestrator | Saturday 22 March 2025 23:57:10 +0000 (0:00:00.230) 0:00:36.040 ******** 2025-03-22 23:57:10.733854 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-03-22 23:57:10.736944 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-03-22 23:57:10.737415 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-03-22 23:57:10.737439 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-03-22 23:57:10.737457 | orchestrator | 2025-03-22 23:57:10.737832 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:57:10.738368 | orchestrator | Saturday 22 March 2025 23:57:10 +0000 (0:00:00.665) 0:00:36.705 ******** 2025-03-22 23:57:10.961277 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:10.961824 | orchestrator | 2025-03-22 23:57:10.962930 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:57:10.964362 | orchestrator | Saturday 22 March 2025 23:57:10 +0000 (0:00:00.230) 0:00:36.935 ******** 2025-03-22 23:57:11.183179 | orchestrator | skipping: [testbed-node-4] 2025-03-22 
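The partition pass above skips every device except `sda`, where it picks up `sda1`, `sda14`, `sda15`, and `sda16`. The selection logic can be sketched as a simple name filter; this is an assumed reconstruction, not the `_add-device-partitions.yml` source:

```python
import re

# Illustrative sketch: given a flat list of block device names, select the
# partitions belonging to one parent disk (disk name followed by digits).
def partitions_of(disk: str, block_devices: list[str]) -> list[str]:
    pattern = re.compile(rf"^{re.escape(disk)}\d+$")
    return [d for d in block_devices if pattern.match(d)]

devs = ["sda", "sda1", "sda14", "sda15", "sda16", "sdb", "sdc", "sdd", "sr0"]
print(partitions_of("sda", devs))
```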
23:57:11.183430 | orchestrator | 2025-03-22 23:57:11.186520 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:57:11.186895 | orchestrator | Saturday 22 March 2025 23:57:11 +0000 (0:00:00.218) 0:00:37.154 ******** 2025-03-22 23:57:11.417785 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:11.418492 | orchestrator | 2025-03-22 23:57:11.418893 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:57:11.420199 | orchestrator | Saturday 22 March 2025 23:57:11 +0000 (0:00:00.237) 0:00:37.391 ******** 2025-03-22 23:57:12.076766 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:12.077817 | orchestrator | 2025-03-22 23:57:12.235268 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-03-22 23:57:12.235307 | orchestrator | Saturday 22 March 2025 23:57:12 +0000 (0:00:00.659) 0:00:38.051 ******** 2025-03-22 23:57:12.235330 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:12.235888 | orchestrator | 2025-03-22 23:57:12.236842 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-03-22 23:57:12.237974 | orchestrator | Saturday 22 March 2025 23:57:12 +0000 (0:00:00.157) 0:00:38.208 ******** 2025-03-22 23:57:12.450259 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '42fe63a2-cbe4-507e-bca1-965016e62eb5'}}) 2025-03-22 23:57:12.451258 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dc4d25c6-b33b-5666-b63a-cd4494109919'}}) 2025-03-22 23:57:12.452394 | orchestrator | 2025-03-22 23:57:12.454778 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-03-22 23:57:14.672199 | orchestrator | Saturday 22 March 2025 23:57:12 +0000 (0:00:00.215) 0:00:38.423 ******** 2025-03-22 23:57:14.672314 | orchestrator | changed: 
[testbed-node-4] => (item={'data': 'osd-block-42fe63a2-cbe4-507e-bca1-965016e62eb5', 'data_vg': 'ceph-42fe63a2-cbe4-507e-bca1-965016e62eb5'}) 2025-03-22 23:57:14.672612 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-dc4d25c6-b33b-5666-b63a-cd4494109919', 'data_vg': 'ceph-dc4d25c6-b33b-5666-b63a-cd4494109919'}) 2025-03-22 23:57:14.674185 | orchestrator | 2025-03-22 23:57:14.674250 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-03-22 23:57:14.674856 | orchestrator | Saturday 22 March 2025 23:57:14 +0000 (0:00:02.220) 0:00:40.643 ******** 2025-03-22 23:57:14.841744 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-42fe63a2-cbe4-507e-bca1-965016e62eb5', 'data_vg': 'ceph-42fe63a2-cbe4-507e-bca1-965016e62eb5'})  2025-03-22 23:57:14.842822 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc4d25c6-b33b-5666-b63a-cd4494109919', 'data_vg': 'ceph-dc4d25c6-b33b-5666-b63a-cd4494109919'})  2025-03-22 23:57:14.844188 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:14.845438 | orchestrator | 2025-03-22 23:57:14.846882 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-03-22 23:57:14.847856 | orchestrator | Saturday 22 March 2025 23:57:14 +0000 (0:00:00.170) 0:00:40.814 ******** 2025-03-22 23:57:16.194691 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-42fe63a2-cbe4-507e-bca1-965016e62eb5', 'data_vg': 'ceph-42fe63a2-cbe4-507e-bca1-965016e62eb5'}) 2025-03-22 23:57:16.196805 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-dc4d25c6-b33b-5666-b63a-cd4494109919', 'data_vg': 'ceph-dc4d25c6-b33b-5666-b63a-cd4494109919'}) 2025-03-22 23:57:16.197543 | orchestrator | 2025-03-22 23:57:16.197577 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-03-22 23:57:16.198475 | orchestrator | Saturday 22 March 2025 
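The VG and LV names created above follow a visible scheme: each entry in `ceph_osd_devices` carries an `osd_lvm_uuid`, which becomes an `osd-block-<uuid>` LV inside a `ceph-<uuid>` VG. A sketch of that mapping, using the device dict printed by the "Create dict of block VGs -> PVs" task (the function name is hypothetical):

```python
# Illustrative sketch of the naming scheme observed in this log: derive
# the data/data_vg pairs fed to the "Create block VGs"/"Create block LVs"
# tasks from the ceph_osd_devices configuration.
def lvm_volumes_from_osd_devices(ceph_osd_devices: dict) -> list[dict]:
    return [
        {"data": f"osd-block-{v['osd_lvm_uuid']}",
         "data_vg": f"ceph-{v['osd_lvm_uuid']}"}
        for v in ceph_osd_devices.values()
    ]

ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "42fe63a2-cbe4-507e-bca1-965016e62eb5"},
    "sdc": {"osd_lvm_uuid": "dc4d25c6-b33b-5666-b63a-cd4494109919"},
}
print(lvm_volumes_from_osd_devices(ceph_osd_devices))
```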
23:57:16 +0000 (0:00:01.351) 0:00:42.166 ******** 2025-03-22 23:57:16.377503 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-42fe63a2-cbe4-507e-bca1-965016e62eb5', 'data_vg': 'ceph-42fe63a2-cbe4-507e-bca1-965016e62eb5'})  2025-03-22 23:57:16.378341 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc4d25c6-b33b-5666-b63a-cd4494109919', 'data_vg': 'ceph-dc4d25c6-b33b-5666-b63a-cd4494109919'})  2025-03-22 23:57:16.379157 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:16.379846 | orchestrator | 2025-03-22 23:57:16.380201 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-03-22 23:57:16.380790 | orchestrator | Saturday 22 March 2025 23:57:16 +0000 (0:00:00.184) 0:00:42.350 ******** 2025-03-22 23:57:16.539274 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:16.539424 | orchestrator | 2025-03-22 23:57:16.540033 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-03-22 23:57:16.540677 | orchestrator | Saturday 22 March 2025 23:57:16 +0000 (0:00:00.162) 0:00:42.513 ******** 2025-03-22 23:57:16.730896 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-42fe63a2-cbe4-507e-bca1-965016e62eb5', 'data_vg': 'ceph-42fe63a2-cbe4-507e-bca1-965016e62eb5'})  2025-03-22 23:57:16.731004 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc4d25c6-b33b-5666-b63a-cd4494109919', 'data_vg': 'ceph-dc4d25c6-b33b-5666-b63a-cd4494109919'})  2025-03-22 23:57:16.732041 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:16.733221 | orchestrator | 2025-03-22 23:57:16.734010 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-03-22 23:57:16.734887 | orchestrator | Saturday 22 March 2025 23:57:16 +0000 (0:00:00.189) 0:00:42.702 ******** 2025-03-22 23:57:17.100825 | orchestrator | skipping: [testbed-node-4] 2025-03-22 
23:57:17.101102 | orchestrator | 2025-03-22 23:57:17.101731 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-03-22 23:57:17.102144 | orchestrator | Saturday 22 March 2025 23:57:17 +0000 (0:00:00.370) 0:00:43.073 ******** 2025-03-22 23:57:17.273793 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-42fe63a2-cbe4-507e-bca1-965016e62eb5', 'data_vg': 'ceph-42fe63a2-cbe4-507e-bca1-965016e62eb5'})  2025-03-22 23:57:17.273899 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc4d25c6-b33b-5666-b63a-cd4494109919', 'data_vg': 'ceph-dc4d25c6-b33b-5666-b63a-cd4494109919'})  2025-03-22 23:57:17.274736 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:17.275122 | orchestrator | 2025-03-22 23:57:17.275813 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-03-22 23:57:17.276306 | orchestrator | Saturday 22 March 2025 23:57:17 +0000 (0:00:00.173) 0:00:43.246 ******** 2025-03-22 23:57:17.415227 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:17.416008 | orchestrator | 2025-03-22 23:57:17.416787 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-03-22 23:57:17.417653 | orchestrator | Saturday 22 March 2025 23:57:17 +0000 (0:00:00.141) 0:00:43.388 ******** 2025-03-22 23:57:17.613761 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-42fe63a2-cbe4-507e-bca1-965016e62eb5', 'data_vg': 'ceph-42fe63a2-cbe4-507e-bca1-965016e62eb5'})  2025-03-22 23:57:17.615139 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc4d25c6-b33b-5666-b63a-cd4494109919', 'data_vg': 'ceph-dc4d25c6-b33b-5666-b63a-cd4494109919'})  2025-03-22 23:57:17.615720 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:17.616689 | orchestrator | 2025-03-22 23:57:17.617448 | orchestrator | TASK [Prepare variables for OSD count check] 
*********************************** 2025-03-22 23:57:17.618955 | orchestrator | Saturday 22 March 2025 23:57:17 +0000 (0:00:00.198) 0:00:43.586 ******** 2025-03-22 23:57:17.768617 | orchestrator | ok: [testbed-node-4] 2025-03-22 23:57:17.769757 | orchestrator | 2025-03-22 23:57:17.771644 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-03-22 23:57:17.772376 | orchestrator | Saturday 22 March 2025 23:57:17 +0000 (0:00:00.154) 0:00:43.740 ******** 2025-03-22 23:57:17.969008 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-42fe63a2-cbe4-507e-bca1-965016e62eb5', 'data_vg': 'ceph-42fe63a2-cbe4-507e-bca1-965016e62eb5'})  2025-03-22 23:57:17.971963 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc4d25c6-b33b-5666-b63a-cd4494109919', 'data_vg': 'ceph-dc4d25c6-b33b-5666-b63a-cd4494109919'})  2025-03-22 23:57:17.971998 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:18.147748 | orchestrator | 2025-03-22 23:57:18.147797 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-03-22 23:57:18.147812 | orchestrator | Saturday 22 March 2025 23:57:17 +0000 (0:00:00.199) 0:00:43.940 ******** 2025-03-22 23:57:18.147834 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-42fe63a2-cbe4-507e-bca1-965016e62eb5', 'data_vg': 'ceph-42fe63a2-cbe4-507e-bca1-965016e62eb5'})  2025-03-22 23:57:18.147912 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc4d25c6-b33b-5666-b63a-cd4494109919', 'data_vg': 'ceph-dc4d25c6-b33b-5666-b63a-cd4494109919'})  2025-03-22 23:57:18.148574 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:18.148919 | orchestrator | 2025-03-22 23:57:18.149677 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-03-22 23:57:18.150309 | orchestrator | Saturday 22 March 2025 23:57:18 +0000 (0:00:00.181) 0:00:44.122 
******** 2025-03-22 23:57:18.352694 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-42fe63a2-cbe4-507e-bca1-965016e62eb5', 'data_vg': 'ceph-42fe63a2-cbe4-507e-bca1-965016e62eb5'})  2025-03-22 23:57:18.353360 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc4d25c6-b33b-5666-b63a-cd4494109919', 'data_vg': 'ceph-dc4d25c6-b33b-5666-b63a-cd4494109919'})  2025-03-22 23:57:18.354535 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:18.355358 | orchestrator | 2025-03-22 23:57:18.356396 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-03-22 23:57:18.357421 | orchestrator | Saturday 22 March 2025 23:57:18 +0000 (0:00:00.203) 0:00:44.325 ******** 2025-03-22 23:57:18.517121 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:18.518076 | orchestrator | 2025-03-22 23:57:18.518762 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-03-22 23:57:18.521186 | orchestrator | Saturday 22 March 2025 23:57:18 +0000 (0:00:00.161) 0:00:44.487 ******** 2025-03-22 23:57:18.664804 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:18.666399 | orchestrator | 2025-03-22 23:57:18.667279 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-03-22 23:57:18.668284 | orchestrator | Saturday 22 March 2025 23:57:18 +0000 (0:00:00.150) 0:00:44.638 ******** 2025-03-22 23:57:18.810154 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:18.810958 | orchestrator | 2025-03-22 23:57:18.812180 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-03-22 23:57:18.812855 | orchestrator | Saturday 22 March 2025 23:57:18 +0000 (0:00:00.144) 0:00:44.783 ******** 2025-03-22 23:57:18.957669 | orchestrator | ok: [testbed-node-4] => { 2025-03-22 23:57:18.958765 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-03-22 
23:57:18.959789 | orchestrator | } 2025-03-22 23:57:18.960784 | orchestrator | 2025-03-22 23:57:18.961563 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-03-22 23:57:18.962524 | orchestrator | Saturday 22 March 2025 23:57:18 +0000 (0:00:00.147) 0:00:44.930 ******** 2025-03-22 23:57:19.331234 | orchestrator | ok: [testbed-node-4] => { 2025-03-22 23:57:19.332649 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-03-22 23:57:19.334099 | orchestrator | } 2025-03-22 23:57:19.334819 | orchestrator | 2025-03-22 23:57:19.335799 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-03-22 23:57:19.336678 | orchestrator | Saturday 22 March 2025 23:57:19 +0000 (0:00:00.373) 0:00:45.304 ******** 2025-03-22 23:57:19.488251 | orchestrator | ok: [testbed-node-4] => { 2025-03-22 23:57:19.490717 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-03-22 23:57:19.491359 | orchestrator | } 2025-03-22 23:57:19.492445 | orchestrator | 2025-03-22 23:57:19.493171 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-03-22 23:57:19.493648 | orchestrator | Saturday 22 March 2025 23:57:19 +0000 (0:00:00.156) 0:00:45.460 ******** 2025-03-22 23:57:20.076349 | orchestrator | ok: [testbed-node-4] 2025-03-22 23:57:20.077212 | orchestrator | 2025-03-22 23:57:20.079151 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-03-22 23:57:20.599043 | orchestrator | Saturday 22 March 2025 23:57:20 +0000 (0:00:00.588) 0:00:46.048 ******** 2025-03-22 23:57:20.599159 | orchestrator | ok: [testbed-node-4] 2025-03-22 23:57:20.600938 | orchestrator | 2025-03-22 23:57:20.602107 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-03-22 23:57:20.603287 | orchestrator | Saturday 22 March 2025 23:57:20 +0000 (0:00:00.523) 0:00:46.572 ******** 2025-03-22 
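The OSD count checks above print empty dicts because none of the configured volumes place a DB or WAL on a separate device. A sketch of the counting step under that assumption (helper name hypothetical): each volume that names a `db_vg` would increment that VG's wanted-OSD count, to be compared against `num_osds` later.

```python
from collections import Counter

# Illustrative sketch: count OSDs that want their RocksDB on each DB VG.
# With no 'db_vg' keys configured, as in this run, the result is the empty
# dict shown for _num_osds_wanted_per_db_vg above.
def osds_wanted_per_db_vg(lvm_volumes: list[dict]) -> dict[str, int]:
    return dict(Counter(v["db_vg"] for v in lvm_volumes if "db_vg" in v))

lvm_volumes = [
    {"data": "osd-block-42fe63a2-cbe4-507e-bca1-965016e62eb5",
     "data_vg": "ceph-42fe63a2-cbe4-507e-bca1-965016e62eb5"},
    {"data": "osd-block-dc4d25c6-b33b-5666-b63a-cd4494109919",
     "data_vg": "ceph-dc4d25c6-b33b-5666-b63a-cd4494109919"},
]
print(osds_wanted_per_db_vg(lvm_volumes))
```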
23:57:21.166493 | orchestrator | ok: [testbed-node-4] 2025-03-22 23:57:21.167066 | orchestrator | 2025-03-22 23:57:21.168296 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-03-22 23:57:21.169904 | orchestrator | Saturday 22 March 2025 23:57:21 +0000 (0:00:00.566) 0:00:47.139 ******** 2025-03-22 23:57:21.312431 | orchestrator | ok: [testbed-node-4] 2025-03-22 23:57:21.312573 | orchestrator | 2025-03-22 23:57:21.315021 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-03-22 23:57:21.315292 | orchestrator | Saturday 22 March 2025 23:57:21 +0000 (0:00:00.146) 0:00:47.286 ******** 2025-03-22 23:57:21.413610 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:21.415015 | orchestrator | 2025-03-22 23:57:21.416250 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-03-22 23:57:21.416316 | orchestrator | Saturday 22 March 2025 23:57:21 +0000 (0:00:00.100) 0:00:47.387 ******** 2025-03-22 23:57:21.524516 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:21.525602 | orchestrator | 2025-03-22 23:57:21.526064 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-03-22 23:57:21.526536 | orchestrator | Saturday 22 March 2025 23:57:21 +0000 (0:00:00.111) 0:00:47.499 ******** 2025-03-22 23:57:21.669838 | orchestrator | ok: [testbed-node-4] => { 2025-03-22 23:57:21.671741 | orchestrator |  "vgs_report": { 2025-03-22 23:57:21.671851 | orchestrator |  "vg": [] 2025-03-22 23:57:21.673044 | orchestrator |  } 2025-03-22 23:57:21.673914 | orchestrator | } 2025-03-22 23:57:21.674544 | orchestrator | 2025-03-22 23:57:21.675219 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-03-22 23:57:21.676122 | orchestrator | Saturday 22 March 2025 23:57:21 +0000 (0:00:00.144) 0:00:47.643 ******** 2025-03-22 23:57:21.799900 | 
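The "Combine JSON from _db/wal/db_wal_vgs_cmd_output" task merges the reports from separate `vgs --reportformat json` runs into the single `vgs_report` printed above (here with an empty `vg` list). A sketch of that merge, assuming LVM's standard JSON report shape `{"report": [{"vg": [...]}]}`:

```python
import json

# Illustrative sketch: flatten several `vgs --reportformat json` outputs
# (DB, WAL, DB+WAL device groups) into one report. Sample inputs stand in
# for the real command output captured by the playbook.
def combine_vg_reports(*outputs: str) -> dict:
    vgs = []
    for out in outputs:
        for report in json.loads(out)["report"]:
            vgs.extend(report.get("vg", []))
    return {"vg": vgs}

empty = '{"report": [{"vg": []}]}'
print(combine_vg_reports(empty, empty, empty))
```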
orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:21.800694 | orchestrator | 2025-03-22 23:57:21.800809 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-03-22 23:57:21.801207 | orchestrator | Saturday 22 March 2025 23:57:21 +0000 (0:00:00.129) 0:00:47.773 ******** 2025-03-22 23:57:21.920528 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:21.921668 | orchestrator | 2025-03-22 23:57:21.922633 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-03-22 23:57:21.923606 | orchestrator | Saturday 22 March 2025 23:57:21 +0000 (0:00:00.121) 0:00:47.895 ******** 2025-03-22 23:57:22.194134 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:22.196182 | orchestrator | 2025-03-22 23:57:22.196914 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-03-22 23:57:22.196985 | orchestrator | Saturday 22 March 2025 23:57:22 +0000 (0:00:00.274) 0:00:48.169 ******** 2025-03-22 23:57:22.329270 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:22.329789 | orchestrator | 2025-03-22 23:57:22.330461 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-03-22 23:57:22.331031 | orchestrator | Saturday 22 March 2025 23:57:22 +0000 (0:00:00.134) 0:00:48.303 ******** 2025-03-22 23:57:22.444030 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:22.444318 | orchestrator | 2025-03-22 23:57:22.445180 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-03-22 23:57:22.445860 | orchestrator | Saturday 22 March 2025 23:57:22 +0000 (0:00:00.114) 0:00:48.418 ******** 2025-03-22 23:57:22.566648 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:22.566848 | orchestrator | 2025-03-22 23:57:22.567993 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 
2025-03-22 23:57:22.568809 | orchestrator | Saturday 22 March 2025 23:57:22 +0000 (0:00:00.122) 0:00:48.540 ******** 2025-03-22 23:57:22.701108 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:22.702218 | orchestrator | 2025-03-22 23:57:22.703220 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-03-22 23:57:22.704745 | orchestrator | Saturday 22 March 2025 23:57:22 +0000 (0:00:00.133) 0:00:48.674 ******** 2025-03-22 23:57:22.832557 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:22.832795 | orchestrator | 2025-03-22 23:57:22.834147 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-03-22 23:57:22.835324 | orchestrator | Saturday 22 March 2025 23:57:22 +0000 (0:00:00.130) 0:00:48.804 ******** 2025-03-22 23:57:22.968788 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:22.969103 | orchestrator | 2025-03-22 23:57:22.971242 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-03-22 23:57:23.102917 | orchestrator | Saturday 22 March 2025 23:57:22 +0000 (0:00:00.138) 0:00:48.943 ******** 2025-03-22 23:57:23.103010 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:23.103710 | orchestrator | 2025-03-22 23:57:23.104058 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-03-22 23:57:23.104894 | orchestrator | Saturday 22 March 2025 23:57:23 +0000 (0:00:00.134) 0:00:49.077 ******** 2025-03-22 23:57:23.236241 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:23.237195 | orchestrator | 2025-03-22 23:57:23.238146 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-03-22 23:57:23.238704 | orchestrator | Saturday 22 March 2025 23:57:23 +0000 (0:00:00.132) 0:00:49.210 ******** 2025-03-22 23:57:23.381111 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:23.381819 
| orchestrator | 2025-03-22 23:57:23.381851 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-03-22 23:57:23.383343 | orchestrator | Saturday 22 March 2025 23:57:23 +0000 (0:00:00.144) 0:00:49.354 ******** 2025-03-22 23:57:23.526404 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:23.527285 | orchestrator | 2025-03-22 23:57:23.528747 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-03-22 23:57:23.529388 | orchestrator | Saturday 22 March 2025 23:57:23 +0000 (0:00:00.146) 0:00:49.501 ******** 2025-03-22 23:57:23.672492 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:23.672727 | orchestrator | 2025-03-22 23:57:23.674133 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-03-22 23:57:23.675135 | orchestrator | Saturday 22 March 2025 23:57:23 +0000 (0:00:00.145) 0:00:49.646 ******** 2025-03-22 23:57:24.040434 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-42fe63a2-cbe4-507e-bca1-965016e62eb5', 'data_vg': 'ceph-42fe63a2-cbe4-507e-bca1-965016e62eb5'})  2025-03-22 23:57:24.041741 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc4d25c6-b33b-5666-b63a-cd4494109919', 'data_vg': 'ceph-dc4d25c6-b33b-5666-b63a-cd4494109919'})  2025-03-22 23:57:24.043509 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:24.044842 | orchestrator | 2025-03-22 23:57:24.045235 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-03-22 23:57:24.045983 | orchestrator | Saturday 22 March 2025 23:57:24 +0000 (0:00:00.366) 0:00:50.012 ******** 2025-03-22 23:57:24.204577 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-42fe63a2-cbe4-507e-bca1-965016e62eb5', 'data_vg': 'ceph-42fe63a2-cbe4-507e-bca1-965016e62eb5'})  2025-03-22 23:57:24.206377 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-dc4d25c6-b33b-5666-b63a-cd4494109919', 'data_vg': 'ceph-dc4d25c6-b33b-5666-b63a-cd4494109919'})  2025-03-22 23:57:24.207886 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:24.209261 | orchestrator | 2025-03-22 23:57:24.210330 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-03-22 23:57:24.210912 | orchestrator | Saturday 22 March 2025 23:57:24 +0000 (0:00:00.165) 0:00:50.178 ******** 2025-03-22 23:57:24.382518 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-42fe63a2-cbe4-507e-bca1-965016e62eb5', 'data_vg': 'ceph-42fe63a2-cbe4-507e-bca1-965016e62eb5'})  2025-03-22 23:57:24.383435 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc4d25c6-b33b-5666-b63a-cd4494109919', 'data_vg': 'ceph-dc4d25c6-b33b-5666-b63a-cd4494109919'})  2025-03-22 23:57:24.384330 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:24.385193 | orchestrator | 2025-03-22 23:57:24.387715 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-03-22 23:57:24.388549 | orchestrator | Saturday 22 March 2025 23:57:24 +0000 (0:00:00.177) 0:00:50.356 ******** 2025-03-22 23:57:24.559377 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-42fe63a2-cbe4-507e-bca1-965016e62eb5', 'data_vg': 'ceph-42fe63a2-cbe4-507e-bca1-965016e62eb5'})  2025-03-22 23:57:24.560328 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc4d25c6-b33b-5666-b63a-cd4494109919', 'data_vg': 'ceph-dc4d25c6-b33b-5666-b63a-cd4494109919'})  2025-03-22 23:57:24.560792 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:24.561266 | orchestrator | 2025-03-22 23:57:24.561941 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-03-22 23:57:24.561970 | orchestrator | Saturday 22 March 2025 23:57:24 +0000 (0:00:00.176) 0:00:50.533 ******** 2025-03-22 
23:57:24.738294 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-42fe63a2-cbe4-507e-bca1-965016e62eb5', 'data_vg': 'ceph-42fe63a2-cbe4-507e-bca1-965016e62eb5'})  2025-03-22 23:57:24.738813 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc4d25c6-b33b-5666-b63a-cd4494109919', 'data_vg': 'ceph-dc4d25c6-b33b-5666-b63a-cd4494109919'})  2025-03-22 23:57:24.739332 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:24.740735 | orchestrator | 2025-03-22 23:57:24.742442 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-03-22 23:57:24.742522 | orchestrator | Saturday 22 March 2025 23:57:24 +0000 (0:00:00.178) 0:00:50.711 ******** 2025-03-22 23:57:24.938892 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-42fe63a2-cbe4-507e-bca1-965016e62eb5', 'data_vg': 'ceph-42fe63a2-cbe4-507e-bca1-965016e62eb5'})  2025-03-22 23:57:24.940307 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc4d25c6-b33b-5666-b63a-cd4494109919', 'data_vg': 'ceph-dc4d25c6-b33b-5666-b63a-cd4494109919'})  2025-03-22 23:57:24.942875 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:25.126729 | orchestrator | 2025-03-22 23:57:25.126801 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-03-22 23:57:25.126819 | orchestrator | Saturday 22 March 2025 23:57:24 +0000 (0:00:00.200) 0:00:50.911 ******** 2025-03-22 23:57:25.126845 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-42fe63a2-cbe4-507e-bca1-965016e62eb5', 'data_vg': 'ceph-42fe63a2-cbe4-507e-bca1-965016e62eb5'})  2025-03-22 23:57:25.128910 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc4d25c6-b33b-5666-b63a-cd4494109919', 'data_vg': 'ceph-dc4d25c6-b33b-5666-b63a-cd4494109919'})  2025-03-22 23:57:25.129444 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:25.129488 | orchestrator | 
2025-03-22 23:57:25.130089 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-03-22 23:57:25.131566 | orchestrator | Saturday 22 March 2025 23:57:25 +0000 (0:00:00.189) 0:00:51.100 ******** 2025-03-22 23:57:25.290893 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-42fe63a2-cbe4-507e-bca1-965016e62eb5', 'data_vg': 'ceph-42fe63a2-cbe4-507e-bca1-965016e62eb5'})  2025-03-22 23:57:25.291674 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc4d25c6-b33b-5666-b63a-cd4494109919', 'data_vg': 'ceph-dc4d25c6-b33b-5666-b63a-cd4494109919'})  2025-03-22 23:57:25.291718 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:25.293881 | orchestrator | 2025-03-22 23:57:25.294754 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-03-22 23:57:25.295820 | orchestrator | Saturday 22 March 2025 23:57:25 +0000 (0:00:00.163) 0:00:51.264 ******** 2025-03-22 23:57:25.841858 | orchestrator | ok: [testbed-node-4] 2025-03-22 23:57:25.842651 | orchestrator | 2025-03-22 23:57:25.844264 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-03-22 23:57:25.844763 | orchestrator | Saturday 22 March 2025 23:57:25 +0000 (0:00:00.550) 0:00:51.814 ******** 2025-03-22 23:57:26.373463 | orchestrator | ok: [testbed-node-4] 2025-03-22 23:57:26.375280 | orchestrator | 2025-03-22 23:57:26.375955 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-03-22 23:57:26.376496 | orchestrator | Saturday 22 March 2025 23:57:26 +0000 (0:00:00.531) 0:00:52.346 ******** 2025-03-22 23:57:26.528763 | orchestrator | ok: [testbed-node-4] 2025-03-22 23:57:26.529461 | orchestrator | 2025-03-22 23:57:26.530854 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-03-22 23:57:26.531360 | orchestrator | Saturday 22 March 2025 
23:57:26 +0000 (0:00:00.156) 0:00:52.502 ******** 2025-03-22 23:57:26.942093 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-42fe63a2-cbe4-507e-bca1-965016e62eb5', 'vg_name': 'ceph-42fe63a2-cbe4-507e-bca1-965016e62eb5'}) 2025-03-22 23:57:26.942495 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-dc4d25c6-b33b-5666-b63a-cd4494109919', 'vg_name': 'ceph-dc4d25c6-b33b-5666-b63a-cd4494109919'}) 2025-03-22 23:57:26.943569 | orchestrator | 2025-03-22 23:57:26.946300 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-03-22 23:57:27.131261 | orchestrator | Saturday 22 March 2025 23:57:26 +0000 (0:00:00.411) 0:00:52.914 ******** 2025-03-22 23:57:27.131383 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-42fe63a2-cbe4-507e-bca1-965016e62eb5', 'data_vg': 'ceph-42fe63a2-cbe4-507e-bca1-965016e62eb5'})  2025-03-22 23:57:27.131744 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc4d25c6-b33b-5666-b63a-cd4494109919', 'data_vg': 'ceph-dc4d25c6-b33b-5666-b63a-cd4494109919'})  2025-03-22 23:57:27.132573 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:27.133377 | orchestrator | 2025-03-22 23:57:27.134705 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-03-22 23:57:27.134762 | orchestrator | Saturday 22 March 2025 23:57:27 +0000 (0:00:00.190) 0:00:53.104 ******** 2025-03-22 23:57:27.317408 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-42fe63a2-cbe4-507e-bca1-965016e62eb5', 'data_vg': 'ceph-42fe63a2-cbe4-507e-bca1-965016e62eb5'})  2025-03-22 23:57:27.317481 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc4d25c6-b33b-5666-b63a-cd4494109919', 'data_vg': 'ceph-dc4d25c6-b33b-5666-b63a-cd4494109919'})  2025-03-22 23:57:27.318738 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:27.319767 | orchestrator | 2025-03-22 
23:57:27.322407 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-03-22 23:57:27.322775 | orchestrator | Saturday 22 March 2025 23:57:27 +0000 (0:00:00.186) 0:00:53.290 ******** 2025-03-22 23:57:27.510415 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-42fe63a2-cbe4-507e-bca1-965016e62eb5', 'data_vg': 'ceph-42fe63a2-cbe4-507e-bca1-965016e62eb5'})  2025-03-22 23:57:27.511420 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dc4d25c6-b33b-5666-b63a-cd4494109919', 'data_vg': 'ceph-dc4d25c6-b33b-5666-b63a-cd4494109919'})  2025-03-22 23:57:27.512490 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:57:27.513612 | orchestrator | 2025-03-22 23:57:27.515964 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-03-22 23:57:28.446656 | orchestrator | Saturday 22 March 2025 23:57:27 +0000 (0:00:00.193) 0:00:53.484 ******** 2025-03-22 23:57:28.446780 | orchestrator | ok: [testbed-node-4] => { 2025-03-22 23:57:28.447810 | orchestrator |  "lvm_report": { 2025-03-22 23:57:28.450213 | orchestrator |  "lv": [ 2025-03-22 23:57:28.451724 | orchestrator |  { 2025-03-22 23:57:28.451755 | orchestrator |  "lv_name": "osd-block-42fe63a2-cbe4-507e-bca1-965016e62eb5", 2025-03-22 23:57:28.452374 | orchestrator |  "vg_name": "ceph-42fe63a2-cbe4-507e-bca1-965016e62eb5" 2025-03-22 23:57:28.452403 | orchestrator |  }, 2025-03-22 23:57:28.452999 | orchestrator |  { 2025-03-22 23:57:28.454441 | orchestrator |  "lv_name": "osd-block-dc4d25c6-b33b-5666-b63a-cd4494109919", 2025-03-22 23:57:28.455085 | orchestrator |  "vg_name": "ceph-dc4d25c6-b33b-5666-b63a-cd4494109919" 2025-03-22 23:57:28.455835 | orchestrator |  } 2025-03-22 23:57:28.456217 | orchestrator |  ], 2025-03-22 23:57:28.456679 | orchestrator |  "pv": [ 2025-03-22 23:57:28.457174 | orchestrator |  { 2025-03-22 23:57:28.457751 | orchestrator |  "pv_name": "/dev/sdb", 2025-03-22 
23:57:28.458482 | orchestrator |  "vg_name": "ceph-42fe63a2-cbe4-507e-bca1-965016e62eb5" 2025-03-22 23:57:28.458703 | orchestrator |  }, 2025-03-22 23:57:28.459780 | orchestrator |  { 2025-03-22 23:57:28.460820 | orchestrator |  "pv_name": "/dev/sdc", 2025-03-22 23:57:28.461575 | orchestrator |  "vg_name": "ceph-dc4d25c6-b33b-5666-b63a-cd4494109919" 2025-03-22 23:57:28.462626 | orchestrator |  } 2025-03-22 23:57:28.462829 | orchestrator |  ] 2025-03-22 23:57:28.463687 | orchestrator |  } 2025-03-22 23:57:28.464493 | orchestrator | } 2025-03-22 23:57:28.464904 | orchestrator | 2025-03-22 23:57:28.465710 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-03-22 23:57:28.466108 | orchestrator | 2025-03-22 23:57:28.466764 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-03-22 23:57:28.467306 | orchestrator | Saturday 22 March 2025 23:57:28 +0000 (0:00:00.934) 0:00:54.419 ******** 2025-03-22 23:57:28.713976 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-03-22 23:57:28.715630 | orchestrator | 2025-03-22 23:57:28.715943 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-03-22 23:57:28.717315 | orchestrator | Saturday 22 March 2025 23:57:28 +0000 (0:00:00.267) 0:00:54.686 ******** 2025-03-22 23:57:28.951493 | orchestrator | ok: [testbed-node-5] 2025-03-22 23:57:28.951937 | orchestrator | 2025-03-22 23:57:28.953378 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:57:28.954394 | orchestrator | Saturday 22 March 2025 23:57:28 +0000 (0:00:00.236) 0:00:54.923 ******** 2025-03-22 23:57:29.446744 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-03-22 23:57:29.447787 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-03-22 
23:57:29.449842 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-03-22 23:57:29.450465 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-03-22 23:57:29.451410 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-03-22 23:57:29.452352 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-03-22 23:57:29.453298 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-03-22 23:57:29.453737 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-03-22 23:57:29.454377 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-03-22 23:57:29.455600 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-03-22 23:57:29.455822 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-03-22 23:57:29.456801 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-03-22 23:57:29.457274 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-03-22 23:57:29.458288 | orchestrator | 2025-03-22 23:57:29.458707 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:57:29.459121 | orchestrator | Saturday 22 March 2025 23:57:29 +0000 (0:00:00.497) 0:00:55.420 ******** 2025-03-22 23:57:29.658487 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:29.658662 | orchestrator | 2025-03-22 23:57:29.659703 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:57:29.660454 | orchestrator | Saturday 22 March 2025 23:57:29 +0000 (0:00:00.210) 0:00:55.631 
******** 2025-03-22 23:57:29.863671 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:29.864088 | orchestrator | 2025-03-22 23:57:29.867053 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:57:30.078563 | orchestrator | Saturday 22 March 2025 23:57:29 +0000 (0:00:00.204) 0:00:55.836 ******** 2025-03-22 23:57:30.078680 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:30.082772 | orchestrator | 2025-03-22 23:57:30.082882 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:57:30.321123 | orchestrator | Saturday 22 March 2025 23:57:30 +0000 (0:00:00.214) 0:00:56.050 ******** 2025-03-22 23:57:30.321235 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:30.321711 | orchestrator | 2025-03-22 23:57:30.321749 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:57:30.324308 | orchestrator | Saturday 22 March 2025 23:57:30 +0000 (0:00:00.244) 0:00:56.294 ******** 2025-03-22 23:57:30.521540 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:30.522459 | orchestrator | 2025-03-22 23:57:30.524310 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:57:30.525643 | orchestrator | Saturday 22 March 2025 23:57:30 +0000 (0:00:00.199) 0:00:56.494 ******** 2025-03-22 23:57:30.732827 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:30.734002 | orchestrator | 2025-03-22 23:57:30.736237 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:57:31.156152 | orchestrator | Saturday 22 March 2025 23:57:30 +0000 (0:00:00.211) 0:00:56.705 ******** 2025-03-22 23:57:31.156299 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:31.157139 | orchestrator | 2025-03-22 23:57:31.157926 | orchestrator | TASK [Add known links to the list of available 
block devices] ****************** 2025-03-22 23:57:31.158780 | orchestrator | Saturday 22 March 2025 23:57:31 +0000 (0:00:00.420) 0:00:57.126 ******** 2025-03-22 23:57:31.370125 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:31.370965 | orchestrator | 2025-03-22 23:57:31.372062 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:57:31.374194 | orchestrator | Saturday 22 March 2025 23:57:31 +0000 (0:00:00.216) 0:00:57.343 ******** 2025-03-22 23:57:31.826881 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9e9b02d0-ba34-4a5b-a8b6-7a2befe88955) 2025-03-22 23:57:31.827515 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9e9b02d0-ba34-4a5b-a8b6-7a2befe88955) 2025-03-22 23:57:31.828700 | orchestrator | 2025-03-22 23:57:31.831471 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:57:31.832079 | orchestrator | Saturday 22 March 2025 23:57:31 +0000 (0:00:00.457) 0:00:57.800 ******** 2025-03-22 23:57:32.285182 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_753a6438-823e-47df-a447-41be61353e18) 2025-03-22 23:57:32.285855 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_753a6438-823e-47df-a447-41be61353e18) 2025-03-22 23:57:32.288337 | orchestrator | 2025-03-22 23:57:32.288716 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:57:32.288788 | orchestrator | Saturday 22 March 2025 23:57:32 +0000 (0:00:00.456) 0:00:58.257 ******** 2025-03-22 23:57:32.759778 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_18369482-6d33-4fed-9778-d084c11eaa5e) 2025-03-22 23:57:32.761839 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_18369482-6d33-4fed-9778-d084c11eaa5e) 2025-03-22 23:57:32.762385 | orchestrator | 2025-03-22 23:57:32.763025 | orchestrator | TASK [Add 
known links to the list of available block devices] ****************** 2025-03-22 23:57:32.764118 | orchestrator | Saturday 22 March 2025 23:57:32 +0000 (0:00:00.476) 0:00:58.734 ******** 2025-03-22 23:57:33.221533 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_af10e111-d90b-4be1-a196-da98d242bbc6) 2025-03-22 23:57:33.224061 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_af10e111-d90b-4be1-a196-da98d242bbc6) 2025-03-22 23:57:33.225975 | orchestrator | 2025-03-22 23:57:33.226009 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-22 23:57:33.226134 | orchestrator | Saturday 22 March 2025 23:57:33 +0000 (0:00:00.459) 0:00:59.193 ******** 2025-03-22 23:57:33.584970 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-03-22 23:57:33.585922 | orchestrator | 2025-03-22 23:57:33.586478 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:57:33.586910 | orchestrator | Saturday 22 March 2025 23:57:33 +0000 (0:00:00.365) 0:00:59.558 ******** 2025-03-22 23:57:34.122499 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-03-22 23:57:34.125857 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-03-22 23:57:34.125908 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-03-22 23:57:34.126117 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-03-22 23:57:34.128427 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-03-22 23:57:34.129148 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-03-22 23:57:34.129736 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-03-22 23:57:34.130255 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-03-22 23:57:34.131797 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-03-22 23:57:34.132120 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-03-22 23:57:34.132963 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-03-22 23:57:34.133714 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-03-22 23:57:34.135258 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-03-22 23:57:34.136370 | orchestrator | 2025-03-22 23:57:34.137158 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:57:34.138118 | orchestrator | Saturday 22 March 2025 23:57:34 +0000 (0:00:00.535) 0:01:00.094 ******** 2025-03-22 23:57:34.336235 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:34.337563 | orchestrator | 2025-03-22 23:57:34.337720 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:57:34.338308 | orchestrator | Saturday 22 March 2025 23:57:34 +0000 (0:00:00.213) 0:01:00.307 ******** 2025-03-22 23:57:34.974978 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:34.975772 | orchestrator | 2025-03-22 23:57:34.978304 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:57:34.978373 | orchestrator | Saturday 22 March 2025 23:57:34 +0000 (0:00:00.639) 0:01:00.947 ******** 2025-03-22 23:57:35.195134 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:35.196332 | orchestrator | 2025-03-22 23:57:35.197739 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:57:35.198901 | orchestrator | Saturday 22 March 2025 23:57:35 +0000 (0:00:00.218) 0:01:01.166 ******** 2025-03-22 23:57:35.424999 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:35.427902 | orchestrator | 2025-03-22 23:57:35.429180 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:57:35.430857 | orchestrator | Saturday 22 March 2025 23:57:35 +0000 (0:00:00.231) 0:01:01.397 ******** 2025-03-22 23:57:35.663284 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:35.663943 | orchestrator | 2025-03-22 23:57:35.664291 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:57:35.665282 | orchestrator | Saturday 22 March 2025 23:57:35 +0000 (0:00:00.239) 0:01:01.637 ******** 2025-03-22 23:57:35.904807 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:35.905834 | orchestrator | 2025-03-22 23:57:35.905872 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:57:35.906330 | orchestrator | Saturday 22 March 2025 23:57:35 +0000 (0:00:00.240) 0:01:01.877 ******** 2025-03-22 23:57:36.133358 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:36.134396 | orchestrator | 2025-03-22 23:57:36.137809 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:57:36.139217 | orchestrator | Saturday 22 March 2025 23:57:36 +0000 (0:00:00.227) 0:01:02.105 ******** 2025-03-22 23:57:36.336808 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:36.337391 | orchestrator | 2025-03-22 23:57:36.338352 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:57:36.339212 | orchestrator | Saturday 22 March 2025 23:57:36 +0000 (0:00:00.204) 0:01:02.309 ******** 
2025-03-22 23:57:37.239413 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-03-22 23:57:37.239562 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-03-22 23:57:37.240431 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-03-22 23:57:37.241209 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-03-22 23:57:37.241926 | orchestrator | 2025-03-22 23:57:37.242429 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:57:37.242939 | orchestrator | Saturday 22 March 2025 23:57:37 +0000 (0:00:00.900) 0:01:03.210 ******** 2025-03-22 23:57:37.453109 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:37.679748 | orchestrator | 2025-03-22 23:57:37.679881 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:57:37.679900 | orchestrator | Saturday 22 March 2025 23:57:37 +0000 (0:00:00.213) 0:01:03.423 ******** 2025-03-22 23:57:37.679933 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:37.680877 | orchestrator | 2025-03-22 23:57:37.681639 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:57:37.684069 | orchestrator | Saturday 22 March 2025 23:57:37 +0000 (0:00:00.228) 0:01:03.652 ******** 2025-03-22 23:57:38.410562 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:38.411482 | orchestrator | 2025-03-22 23:57:38.411638 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-22 23:57:38.623296 | orchestrator | Saturday 22 March 2025 23:57:38 +0000 (0:00:00.729) 0:01:04.382 ******** 2025-03-22 23:57:38.623373 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:38.624340 | orchestrator | 2025-03-22 23:57:38.625234 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-03-22 23:57:38.625877 | orchestrator | Saturday 22 March 2025 23:57:38 
+0000 (0:00:00.215) 0:01:04.597 ******** 2025-03-22 23:57:38.786131 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:38.786222 | orchestrator | 2025-03-22 23:57:38.788261 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-03-22 23:57:38.789548 | orchestrator | Saturday 22 March 2025 23:57:38 +0000 (0:00:00.162) 0:01:04.759 ******** 2025-03-22 23:57:39.015257 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cd0beb18-7cb7-5f7e-bb8d-a321f863b568'}}) 2025-03-22 23:57:39.015649 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '54fa1689-32eb-51dd-8e84-55dfa69ec772'}}) 2025-03-22 23:57:39.017394 | orchestrator | 2025-03-22 23:57:39.018127 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-03-22 23:57:39.018788 | orchestrator | Saturday 22 March 2025 23:57:39 +0000 (0:00:00.226) 0:01:04.986 ******** 2025-03-22 23:57:41.222854 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-cd0beb18-7cb7-5f7e-bb8d-a321f863b568', 'data_vg': 'ceph-cd0beb18-7cb7-5f7e-bb8d-a321f863b568'}) 2025-03-22 23:57:41.224834 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-54fa1689-32eb-51dd-8e84-55dfa69ec772', 'data_vg': 'ceph-54fa1689-32eb-51dd-8e84-55dfa69ec772'}) 2025-03-22 23:57:41.225965 | orchestrator | 2025-03-22 23:57:41.226895 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-03-22 23:57:41.227732 | orchestrator | Saturday 22 March 2025 23:57:41 +0000 (0:00:02.207) 0:01:07.194 ******** 2025-03-22 23:57:41.378701 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cd0beb18-7cb7-5f7e-bb8d-a321f863b568', 'data_vg': 'ceph-cd0beb18-7cb7-5f7e-bb8d-a321f863b568'})  2025-03-22 23:57:41.379416 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-54fa1689-32eb-51dd-8e84-55dfa69ec772', 'data_vg': 'ceph-54fa1689-32eb-51dd-8e84-55dfa69ec772'})  2025-03-22 23:57:41.380301 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:41.380767 | orchestrator | 2025-03-22 23:57:41.381610 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-03-22 23:57:41.382327 | orchestrator | Saturday 22 March 2025 23:57:41 +0000 (0:00:00.158) 0:01:07.352 ******** 2025-03-22 23:57:42.809134 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-cd0beb18-7cb7-5f7e-bb8d-a321f863b568', 'data_vg': 'ceph-cd0beb18-7cb7-5f7e-bb8d-a321f863b568'}) 2025-03-22 23:57:42.809272 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-54fa1689-32eb-51dd-8e84-55dfa69ec772', 'data_vg': 'ceph-54fa1689-32eb-51dd-8e84-55dfa69ec772'}) 2025-03-22 23:57:42.810415 | orchestrator | 2025-03-22 23:57:42.812011 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-03-22 23:57:42.812408 | orchestrator | Saturday 22 March 2025 23:57:42 +0000 (0:00:01.429) 0:01:08.782 ******** 2025-03-22 23:57:43.002395 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cd0beb18-7cb7-5f7e-bb8d-a321f863b568', 'data_vg': 'ceph-cd0beb18-7cb7-5f7e-bb8d-a321f863b568'})  2025-03-22 23:57:43.003395 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54fa1689-32eb-51dd-8e84-55dfa69ec772', 'data_vg': 'ceph-54fa1689-32eb-51dd-8e84-55dfa69ec772'})  2025-03-22 23:57:43.004120 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:43.004989 | orchestrator | 2025-03-22 23:57:43.005615 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-03-22 23:57:43.006409 | orchestrator | Saturday 22 March 2025 23:57:42 +0000 (0:00:00.194) 0:01:08.976 ******** 2025-03-22 23:57:43.146105 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:43.146223 | 
orchestrator | 2025-03-22 23:57:43.146880 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-03-22 23:57:43.147264 | orchestrator | Saturday 22 March 2025 23:57:43 +0000 (0:00:00.144) 0:01:09.121 ******** 2025-03-22 23:57:43.435799 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cd0beb18-7cb7-5f7e-bb8d-a321f863b568', 'data_vg': 'ceph-cd0beb18-7cb7-5f7e-bb8d-a321f863b568'})  2025-03-22 23:57:43.435961 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54fa1689-32eb-51dd-8e84-55dfa69ec772', 'data_vg': 'ceph-54fa1689-32eb-51dd-8e84-55dfa69ec772'})  2025-03-22 23:57:43.436854 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:43.437773 | orchestrator | 2025-03-22 23:57:43.438160 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-03-22 23:57:43.438661 | orchestrator | Saturday 22 March 2025 23:57:43 +0000 (0:00:00.287) 0:01:09.409 ******** 2025-03-22 23:57:43.563063 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:43.563729 | orchestrator | 2025-03-22 23:57:43.565091 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-03-22 23:57:43.565947 | orchestrator | Saturday 22 March 2025 23:57:43 +0000 (0:00:00.127) 0:01:09.537 ******** 2025-03-22 23:57:43.728175 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cd0beb18-7cb7-5f7e-bb8d-a321f863b568', 'data_vg': 'ceph-cd0beb18-7cb7-5f7e-bb8d-a321f863b568'})  2025-03-22 23:57:43.729445 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54fa1689-32eb-51dd-8e84-55dfa69ec772', 'data_vg': 'ceph-54fa1689-32eb-51dd-8e84-55dfa69ec772'})  2025-03-22 23:57:43.730563 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:43.731511 | orchestrator | 2025-03-22 23:57:43.732321 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 
2025-03-22 23:57:43.733205 | orchestrator | Saturday 22 March 2025 23:57:43 +0000 (0:00:00.164) 0:01:09.702 ******** 2025-03-22 23:57:43.868243 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:43.868382 | orchestrator | 2025-03-22 23:57:43.868724 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-03-22 23:57:43.869391 | orchestrator | Saturday 22 March 2025 23:57:43 +0000 (0:00:00.141) 0:01:09.843 ******** 2025-03-22 23:57:44.025733 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cd0beb18-7cb7-5f7e-bb8d-a321f863b568', 'data_vg': 'ceph-cd0beb18-7cb7-5f7e-bb8d-a321f863b568'})  2025-03-22 23:57:44.026275 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54fa1689-32eb-51dd-8e84-55dfa69ec772', 'data_vg': 'ceph-54fa1689-32eb-51dd-8e84-55dfa69ec772'})  2025-03-22 23:57:44.026976 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:44.027369 | orchestrator | 2025-03-22 23:57:44.029557 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-03-22 23:57:44.029760 | orchestrator | Saturday 22 March 2025 23:57:44 +0000 (0:00:00.157) 0:01:10.000 ******** 2025-03-22 23:57:44.153848 | orchestrator | ok: [testbed-node-5] 2025-03-22 23:57:44.154880 | orchestrator | 2025-03-22 23:57:44.156359 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-03-22 23:57:44.334185 | orchestrator | Saturday 22 March 2025 23:57:44 +0000 (0:00:00.127) 0:01:10.128 ******** 2025-03-22 23:57:44.334266 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cd0beb18-7cb7-5f7e-bb8d-a321f863b568', 'data_vg': 'ceph-cd0beb18-7cb7-5f7e-bb8d-a321f863b568'})  2025-03-22 23:57:44.336730 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54fa1689-32eb-51dd-8e84-55dfa69ec772', 'data_vg': 'ceph-54fa1689-32eb-51dd-8e84-55dfa69ec772'})  2025-03-22 23:57:44.337287 | 
orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:44.337314 | orchestrator | 2025-03-22 23:57:44.337335 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-03-22 23:57:44.337886 | orchestrator | Saturday 22 March 2025 23:57:44 +0000 (0:00:00.176) 0:01:10.304 ******** 2025-03-22 23:57:44.508050 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cd0beb18-7cb7-5f7e-bb8d-a321f863b568', 'data_vg': 'ceph-cd0beb18-7cb7-5f7e-bb8d-a321f863b568'})  2025-03-22 23:57:44.508722 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54fa1689-32eb-51dd-8e84-55dfa69ec772', 'data_vg': 'ceph-54fa1689-32eb-51dd-8e84-55dfa69ec772'})  2025-03-22 23:57:44.509566 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:44.510079 | orchestrator | 2025-03-22 23:57:44.510563 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-03-22 23:57:44.512857 | orchestrator | Saturday 22 March 2025 23:57:44 +0000 (0:00:00.177) 0:01:10.482 ******** 2025-03-22 23:57:44.694250 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cd0beb18-7cb7-5f7e-bb8d-a321f863b568', 'data_vg': 'ceph-cd0beb18-7cb7-5f7e-bb8d-a321f863b568'})  2025-03-22 23:57:44.694780 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54fa1689-32eb-51dd-8e84-55dfa69ec772', 'data_vg': 'ceph-54fa1689-32eb-51dd-8e84-55dfa69ec772'})  2025-03-22 23:57:44.695138 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:44.695427 | orchestrator | 2025-03-22 23:57:44.695800 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-03-22 23:57:44.696203 | orchestrator | Saturday 22 March 2025 23:57:44 +0000 (0:00:00.186) 0:01:10.668 ******** 2025-03-22 23:57:44.875511 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:44.875822 | orchestrator | 2025-03-22 23:57:44.875857 | orchestrator | TASK [Fail 
if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-03-22 23:57:44.876529 | orchestrator | Saturday 22 March 2025 23:57:44 +0000 (0:00:00.180) 0:01:10.849 ******** 2025-03-22 23:57:45.020803 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:45.021207 | orchestrator | 2025-03-22 23:57:45.021242 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-03-22 23:57:45.021778 | orchestrator | Saturday 22 March 2025 23:57:45 +0000 (0:00:00.144) 0:01:10.993 ******** 2025-03-22 23:57:45.188560 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:45.188908 | orchestrator | 2025-03-22 23:57:45.189548 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-03-22 23:57:45.190360 | orchestrator | Saturday 22 March 2025 23:57:45 +0000 (0:00:00.168) 0:01:11.162 ******** 2025-03-22 23:57:45.563125 | orchestrator | ok: [testbed-node-5] => { 2025-03-22 23:57:45.563887 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-03-22 23:57:45.565082 | orchestrator | } 2025-03-22 23:57:45.565569 | orchestrator | 2025-03-22 23:57:45.566317 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-03-22 23:57:45.566954 | orchestrator | Saturday 22 March 2025 23:57:45 +0000 (0:00:00.374) 0:01:11.536 ******** 2025-03-22 23:57:45.709768 | orchestrator | ok: [testbed-node-5] => { 2025-03-22 23:57:45.711215 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-03-22 23:57:45.712308 | orchestrator | } 2025-03-22 23:57:45.715025 | orchestrator | 2025-03-22 23:57:45.869143 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-03-22 23:57:45.869199 | orchestrator | Saturday 22 March 2025 23:57:45 +0000 (0:00:00.147) 0:01:11.684 ******** 2025-03-22 23:57:45.869222 | orchestrator | ok: [testbed-node-5] => { 2025-03-22 23:57:45.870218 | orchestrator |  
"_num_osds_wanted_per_db_wal_vg": {} 2025-03-22 23:57:45.872753 | orchestrator | } 2025-03-22 23:57:45.873066 | orchestrator | 2025-03-22 23:57:45.873848 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-03-22 23:57:45.874717 | orchestrator | Saturday 22 March 2025 23:57:45 +0000 (0:00:00.157) 0:01:11.841 ******** 2025-03-22 23:57:46.418651 | orchestrator | ok: [testbed-node-5] 2025-03-22 23:57:46.421955 | orchestrator | 2025-03-22 23:57:46.422605 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-03-22 23:57:46.423541 | orchestrator | Saturday 22 March 2025 23:57:46 +0000 (0:00:00.549) 0:01:12.391 ******** 2025-03-22 23:57:46.945428 | orchestrator | ok: [testbed-node-5] 2025-03-22 23:57:46.945773 | orchestrator | 2025-03-22 23:57:46.946577 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-03-22 23:57:46.946856 | orchestrator | Saturday 22 March 2025 23:57:46 +0000 (0:00:00.527) 0:01:12.918 ******** 2025-03-22 23:57:47.513841 | orchestrator | ok: [testbed-node-5] 2025-03-22 23:57:47.513967 | orchestrator | 2025-03-22 23:57:47.515380 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-03-22 23:57:47.515656 | orchestrator | Saturday 22 March 2025 23:57:47 +0000 (0:00:00.565) 0:01:13.484 ******** 2025-03-22 23:57:47.679306 | orchestrator | ok: [testbed-node-5] 2025-03-22 23:57:47.680437 | orchestrator | 2025-03-22 23:57:47.681566 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-03-22 23:57:47.682870 | orchestrator | Saturday 22 March 2025 23:57:47 +0000 (0:00:00.167) 0:01:13.652 ******** 2025-03-22 23:57:47.815168 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:47.816707 | orchestrator | 2025-03-22 23:57:47.817667 | orchestrator | TASK [Calculate VG sizes (with buffer)] 
**************************************** 2025-03-22 23:57:47.819881 | orchestrator | Saturday 22 March 2025 23:57:47 +0000 (0:00:00.136) 0:01:13.788 ******** 2025-03-22 23:57:47.937175 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:47.939107 | orchestrator | 2025-03-22 23:57:47.943094 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-03-22 23:57:47.943457 | orchestrator | Saturday 22 March 2025 23:57:47 +0000 (0:00:00.121) 0:01:13.909 ******** 2025-03-22 23:57:48.099797 | orchestrator | ok: [testbed-node-5] => { 2025-03-22 23:57:48.100343 | orchestrator |  "vgs_report": { 2025-03-22 23:57:48.101982 | orchestrator |  "vg": [] 2025-03-22 23:57:48.103245 | orchestrator |  } 2025-03-22 23:57:48.104124 | orchestrator | } 2025-03-22 23:57:48.105083 | orchestrator | 2025-03-22 23:57:48.106113 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-03-22 23:57:48.106906 | orchestrator | Saturday 22 March 2025 23:57:48 +0000 (0:00:00.163) 0:01:14.073 ******** 2025-03-22 23:57:48.234685 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:48.234893 | orchestrator | 2025-03-22 23:57:48.236107 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-03-22 23:57:48.236658 | orchestrator | Saturday 22 March 2025 23:57:48 +0000 (0:00:00.135) 0:01:14.209 ******** 2025-03-22 23:57:48.583902 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:48.585303 | orchestrator | 2025-03-22 23:57:48.586063 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-03-22 23:57:48.587111 | orchestrator | Saturday 22 March 2025 23:57:48 +0000 (0:00:00.346) 0:01:14.555 ******** 2025-03-22 23:57:48.737928 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:48.740085 | orchestrator | 2025-03-22 23:57:48.740958 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices 
> available] ******************* 2025-03-22 23:57:48.742378 | orchestrator | Saturday 22 March 2025 23:57:48 +0000 (0:00:00.154) 0:01:14.710 ******** 2025-03-22 23:57:48.892523 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:48.892647 | orchestrator | 2025-03-22 23:57:48.893972 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-03-22 23:57:48.895032 | orchestrator | Saturday 22 March 2025 23:57:48 +0000 (0:00:00.154) 0:01:14.864 ******** 2025-03-22 23:57:49.036030 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:49.036837 | orchestrator | 2025-03-22 23:57:49.037674 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-03-22 23:57:49.038572 | orchestrator | Saturday 22 March 2025 23:57:49 +0000 (0:00:00.144) 0:01:15.008 ******** 2025-03-22 23:57:49.204204 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:49.205975 | orchestrator | 2025-03-22 23:57:49.207947 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-03-22 23:57:49.208333 | orchestrator | Saturday 22 March 2025 23:57:49 +0000 (0:00:00.167) 0:01:15.176 ******** 2025-03-22 23:57:49.368688 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:49.369794 | orchestrator | 2025-03-22 23:57:49.371991 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-03-22 23:57:49.541416 | orchestrator | Saturday 22 March 2025 23:57:49 +0000 (0:00:00.165) 0:01:15.341 ******** 2025-03-22 23:57:49.541480 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:49.542125 | orchestrator | 2025-03-22 23:57:49.543268 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-03-22 23:57:49.543951 | orchestrator | Saturday 22 March 2025 23:57:49 +0000 (0:00:00.171) 0:01:15.512 ******** 2025-03-22 23:57:49.693529 | orchestrator | skipping: 
[testbed-node-5] 2025-03-22 23:57:49.694899 | orchestrator | 2025-03-22 23:57:49.696190 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-03-22 23:57:49.697606 | orchestrator | Saturday 22 March 2025 23:57:49 +0000 (0:00:00.153) 0:01:15.666 ******** 2025-03-22 23:57:49.849812 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:49.850668 | orchestrator | 2025-03-22 23:57:49.852302 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-03-22 23:57:49.853136 | orchestrator | Saturday 22 March 2025 23:57:49 +0000 (0:00:00.156) 0:01:15.823 ******** 2025-03-22 23:57:50.049608 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:50.050215 | orchestrator | 2025-03-22 23:57:50.050720 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-03-22 23:57:50.051279 | orchestrator | Saturday 22 March 2025 23:57:50 +0000 (0:00:00.198) 0:01:16.022 ******** 2025-03-22 23:57:50.209823 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:50.209997 | orchestrator | 2025-03-22 23:57:50.210273 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-03-22 23:57:50.210460 | orchestrator | Saturday 22 March 2025 23:57:50 +0000 (0:00:00.161) 0:01:16.183 ******** 2025-03-22 23:57:50.357507 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:50.358101 | orchestrator | 2025-03-22 23:57:50.358856 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-03-22 23:57:50.361265 | orchestrator | Saturday 22 March 2025 23:57:50 +0000 (0:00:00.147) 0:01:16.330 ******** 2025-03-22 23:57:50.855850 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:50.856363 | orchestrator | 2025-03-22 23:57:50.858769 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-03-22 23:57:51.088321 | 
orchestrator | Saturday 22 March 2025 23:57:50 +0000 (0:00:00.499) 0:01:16.829 ******** 2025-03-22 23:57:51.088419 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cd0beb18-7cb7-5f7e-bb8d-a321f863b568', 'data_vg': 'ceph-cd0beb18-7cb7-5f7e-bb8d-a321f863b568'})  2025-03-22 23:57:51.090519 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54fa1689-32eb-51dd-8e84-55dfa69ec772', 'data_vg': 'ceph-54fa1689-32eb-51dd-8e84-55dfa69ec772'})  2025-03-22 23:57:51.090537 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:51.090551 | orchestrator | 2025-03-22 23:57:51.090861 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-03-22 23:57:51.091802 | orchestrator | Saturday 22 March 2025 23:57:51 +0000 (0:00:00.228) 0:01:17.058 ******** 2025-03-22 23:57:51.291307 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cd0beb18-7cb7-5f7e-bb8d-a321f863b568', 'data_vg': 'ceph-cd0beb18-7cb7-5f7e-bb8d-a321f863b568'})  2025-03-22 23:57:51.291468 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54fa1689-32eb-51dd-8e84-55dfa69ec772', 'data_vg': 'ceph-54fa1689-32eb-51dd-8e84-55dfa69ec772'})  2025-03-22 23:57:51.292261 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:51.294079 | orchestrator | 2025-03-22 23:57:51.294786 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-03-22 23:57:51.296209 | orchestrator | Saturday 22 March 2025 23:57:51 +0000 (0:00:00.204) 0:01:17.263 ******** 2025-03-22 23:57:51.485062 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cd0beb18-7cb7-5f7e-bb8d-a321f863b568', 'data_vg': 'ceph-cd0beb18-7cb7-5f7e-bb8d-a321f863b568'})  2025-03-22 23:57:51.485248 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54fa1689-32eb-51dd-8e84-55dfa69ec772', 'data_vg': 'ceph-54fa1689-32eb-51dd-8e84-55dfa69ec772'})  2025-03-22 
23:57:51.485278 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:51.487908 | orchestrator | 2025-03-22 23:57:51.489739 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-03-22 23:57:51.490156 | orchestrator | Saturday 22 March 2025 23:57:51 +0000 (0:00:00.193) 0:01:17.456 ******** 2025-03-22 23:57:51.657281 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cd0beb18-7cb7-5f7e-bb8d-a321f863b568', 'data_vg': 'ceph-cd0beb18-7cb7-5f7e-bb8d-a321f863b568'})  2025-03-22 23:57:51.658112 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54fa1689-32eb-51dd-8e84-55dfa69ec772', 'data_vg': 'ceph-54fa1689-32eb-51dd-8e84-55dfa69ec772'})  2025-03-22 23:57:51.659034 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:51.659512 | orchestrator | 2025-03-22 23:57:51.660031 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-03-22 23:57:51.660419 | orchestrator | Saturday 22 March 2025 23:57:51 +0000 (0:00:00.171) 0:01:17.628 ******** 2025-03-22 23:57:51.863515 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cd0beb18-7cb7-5f7e-bb8d-a321f863b568', 'data_vg': 'ceph-cd0beb18-7cb7-5f7e-bb8d-a321f863b568'})  2025-03-22 23:57:51.863892 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54fa1689-32eb-51dd-8e84-55dfa69ec772', 'data_vg': 'ceph-54fa1689-32eb-51dd-8e84-55dfa69ec772'})  2025-03-22 23:57:51.866328 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:51.868817 | orchestrator | 2025-03-22 23:57:51.869138 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-03-22 23:57:51.869168 | orchestrator | Saturday 22 March 2025 23:57:51 +0000 (0:00:00.208) 0:01:17.836 ******** 2025-03-22 23:57:52.061936 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cd0beb18-7cb7-5f7e-bb8d-a321f863b568', 'data_vg': 
'ceph-cd0beb18-7cb7-5f7e-bb8d-a321f863b568'})  2025-03-22 23:57:52.062184 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54fa1689-32eb-51dd-8e84-55dfa69ec772', 'data_vg': 'ceph-54fa1689-32eb-51dd-8e84-55dfa69ec772'})  2025-03-22 23:57:52.063302 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:52.064079 | orchestrator | 2025-03-22 23:57:52.064286 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-03-22 23:57:52.065077 | orchestrator | Saturday 22 March 2025 23:57:52 +0000 (0:00:00.198) 0:01:18.034 ******** 2025-03-22 23:57:52.284395 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cd0beb18-7cb7-5f7e-bb8d-a321f863b568', 'data_vg': 'ceph-cd0beb18-7cb7-5f7e-bb8d-a321f863b568'})  2025-03-22 23:57:52.285826 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54fa1689-32eb-51dd-8e84-55dfa69ec772', 'data_vg': 'ceph-54fa1689-32eb-51dd-8e84-55dfa69ec772'})  2025-03-22 23:57:52.286661 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:52.287444 | orchestrator | 2025-03-22 23:57:52.287474 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-03-22 23:57:52.287716 | orchestrator | Saturday 22 March 2025 23:57:52 +0000 (0:00:00.222) 0:01:18.256 ******** 2025-03-22 23:57:52.488641 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cd0beb18-7cb7-5f7e-bb8d-a321f863b568', 'data_vg': 'ceph-cd0beb18-7cb7-5f7e-bb8d-a321f863b568'})  2025-03-22 23:57:52.489041 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54fa1689-32eb-51dd-8e84-55dfa69ec772', 'data_vg': 'ceph-54fa1689-32eb-51dd-8e84-55dfa69ec772'})  2025-03-22 23:57:52.490355 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:52.491037 | orchestrator | 2025-03-22 23:57:52.491846 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-03-22 
23:57:52.492459 | orchestrator | Saturday 22 March 2025 23:57:52 +0000 (0:00:00.203) 0:01:18.459 ******** 2025-03-22 23:57:53.028516 | orchestrator | ok: [testbed-node-5] 2025-03-22 23:57:53.028736 | orchestrator | 2025-03-22 23:57:53.029179 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-03-22 23:57:53.029975 | orchestrator | Saturday 22 March 2025 23:57:53 +0000 (0:00:00.541) 0:01:19.001 ******** 2025-03-22 23:57:53.644200 | orchestrator | ok: [testbed-node-5] 2025-03-22 23:57:53.645068 | orchestrator | 2025-03-22 23:57:53.646374 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-03-22 23:57:53.647177 | orchestrator | Saturday 22 March 2025 23:57:53 +0000 (0:00:00.614) 0:01:19.615 ******** 2025-03-22 23:57:54.145477 | orchestrator | ok: [testbed-node-5] 2025-03-22 23:57:54.146377 | orchestrator | 2025-03-22 23:57:54.148658 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-03-22 23:57:54.149814 | orchestrator | Saturday 22 March 2025 23:57:54 +0000 (0:00:00.503) 0:01:20.119 ******** 2025-03-22 23:57:54.358699 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-54fa1689-32eb-51dd-8e84-55dfa69ec772', 'vg_name': 'ceph-54fa1689-32eb-51dd-8e84-55dfa69ec772'}) 2025-03-22 23:57:54.359811 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-cd0beb18-7cb7-5f7e-bb8d-a321f863b568', 'vg_name': 'ceph-cd0beb18-7cb7-5f7e-bb8d-a321f863b568'}) 2025-03-22 23:57:54.360699 | orchestrator | 2025-03-22 23:57:54.363897 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-03-22 23:57:54.570408 | orchestrator | Saturday 22 March 2025 23:57:54 +0000 (0:00:00.212) 0:01:20.331 ******** 2025-03-22 23:57:54.570481 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cd0beb18-7cb7-5f7e-bb8d-a321f863b568', 'data_vg': 
'ceph-cd0beb18-7cb7-5f7e-bb8d-a321f863b568'})  2025-03-22 23:57:54.570965 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54fa1689-32eb-51dd-8e84-55dfa69ec772', 'data_vg': 'ceph-54fa1689-32eb-51dd-8e84-55dfa69ec772'})  2025-03-22 23:57:54.572149 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:54.576095 | orchestrator | 2025-03-22 23:57:54.576479 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-03-22 23:57:54.577407 | orchestrator | Saturday 22 March 2025 23:57:54 +0000 (0:00:00.211) 0:01:20.542 ******** 2025-03-22 23:57:54.759522 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cd0beb18-7cb7-5f7e-bb8d-a321f863b568', 'data_vg': 'ceph-cd0beb18-7cb7-5f7e-bb8d-a321f863b568'})  2025-03-22 23:57:54.760848 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54fa1689-32eb-51dd-8e84-55dfa69ec772', 'data_vg': 'ceph-54fa1689-32eb-51dd-8e84-55dfa69ec772'})  2025-03-22 23:57:54.761904 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:54.763221 | orchestrator | 2025-03-22 23:57:54.764314 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-03-22 23:57:54.764858 | orchestrator | Saturday 22 March 2025 23:57:54 +0000 (0:00:00.189) 0:01:20.732 ******** 2025-03-22 23:57:54.954685 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cd0beb18-7cb7-5f7e-bb8d-a321f863b568', 'data_vg': 'ceph-cd0beb18-7cb7-5f7e-bb8d-a321f863b568'})  2025-03-22 23:57:54.955409 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54fa1689-32eb-51dd-8e84-55dfa69ec772', 'data_vg': 'ceph-54fa1689-32eb-51dd-8e84-55dfa69ec772'})  2025-03-22 23:57:54.956334 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:57:54.956368 | orchestrator | 2025-03-22 23:57:54.957882 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-03-22 
23:57:54.959829 | orchestrator | Saturday 22 March 2025 23:57:54 +0000 (0:00:00.194) 0:01:20.927 ******** 2025-03-22 23:57:55.410432 | orchestrator | ok: [testbed-node-5] => { 2025-03-22 23:57:55.410651 | orchestrator |  "lvm_report": { 2025-03-22 23:57:55.413701 | orchestrator |  "lv": [ 2025-03-22 23:57:55.414844 | orchestrator |  { 2025-03-22 23:57:55.416361 | orchestrator |  "lv_name": "osd-block-54fa1689-32eb-51dd-8e84-55dfa69ec772", 2025-03-22 23:57:55.417859 | orchestrator |  "vg_name": "ceph-54fa1689-32eb-51dd-8e84-55dfa69ec772" 2025-03-22 23:57:55.419057 | orchestrator |  }, 2025-03-22 23:57:55.419913 | orchestrator |  { 2025-03-22 23:57:55.420965 | orchestrator |  "lv_name": "osd-block-cd0beb18-7cb7-5f7e-bb8d-a321f863b568", 2025-03-22 23:57:55.422334 | orchestrator |  "vg_name": "ceph-cd0beb18-7cb7-5f7e-bb8d-a321f863b568" 2025-03-22 23:57:55.423655 | orchestrator |  } 2025-03-22 23:57:55.425437 | orchestrator |  ], 2025-03-22 23:57:55.426145 | orchestrator |  "pv": [ 2025-03-22 23:57:55.427231 | orchestrator |  { 2025-03-22 23:57:55.428944 | orchestrator |  "pv_name": "/dev/sdb", 2025-03-22 23:57:55.431964 | orchestrator |  "vg_name": "ceph-cd0beb18-7cb7-5f7e-bb8d-a321f863b568" 2025-03-22 23:57:55.433167 | orchestrator |  }, 2025-03-22 23:57:55.433843 | orchestrator |  { 2025-03-22 23:57:55.434643 | orchestrator |  "pv_name": "/dev/sdc", 2025-03-22 23:57:55.435521 | orchestrator |  "vg_name": "ceph-54fa1689-32eb-51dd-8e84-55dfa69ec772" 2025-03-22 23:57:55.436071 | orchestrator |  } 2025-03-22 23:57:55.436744 | orchestrator |  ] 2025-03-22 23:57:55.437647 | orchestrator |  } 2025-03-22 23:57:55.437906 | orchestrator | } 2025-03-22 23:57:55.438543 | orchestrator | 2025-03-22 23:57:55.441049 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-22 23:57:55.441164 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-03-22 23:57:55.441203 | 
orchestrator | 2025-03-22 23:57:55 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-22 23:57:55.441706 | orchestrator | 2025-03-22 23:57:55 | INFO  | Please wait and do not abort execution. 2025-03-22 23:57:55.441737 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-03-22 23:57:55.442219 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-03-22 23:57:55.442892 | orchestrator | 2025-03-22 23:57:55.443392 | orchestrator | 2025-03-22 23:57:55.443869 | orchestrator | 2025-03-22 23:57:55.444732 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-22 23:57:55.445698 | orchestrator | Saturday 22 March 2025 23:57:55 +0000 (0:00:00.456) 0:01:21.383 ******** 2025-03-22 23:57:55.446101 | orchestrator | =============================================================================== 2025-03-22 23:57:55.446603 | orchestrator | Create block VGs -------------------------------------------------------- 6.77s 2025-03-22 23:57:55.447279 | orchestrator | Create block LVs -------------------------------------------------------- 4.42s 2025-03-22 23:57:55.447848 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 2.09s 2025-03-22 23:57:55.448139 | orchestrator | Print LVM report data --------------------------------------------------- 2.03s 2025-03-22 23:57:55.448569 | orchestrator | Add known links to the list of available block devices ------------------ 1.89s 2025-03-22 23:57:55.448995 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.71s 2025-03-22 23:57:55.450270 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.69s 2025-03-22 23:57:55.450978 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 
1.64s 2025-03-22 23:57:55.452734 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.60s 2025-03-22 23:57:55.453737 | orchestrator | Add known partitions to the list of available block devices ------------- 1.56s 2025-03-22 23:57:55.455276 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.99s 2025-03-22 23:57:55.456484 | orchestrator | Add known links to the list of available block devices ------------------ 0.93s 2025-03-22 23:57:55.457869 | orchestrator | Add known partitions to the list of available block devices ------------- 0.90s 2025-03-22 23:57:55.458574 | orchestrator | Create dict of block VGs -> PVs from ceph_osd_devices ------------------- 0.86s 2025-03-22 23:57:55.459073 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.83s 2025-03-22 23:57:55.460036 | orchestrator | Combine JSON from _lvs_cmd_output/_pvs_cmd_output ----------------------- 0.83s 2025-03-22 23:57:55.461123 | orchestrator | Create list of VG/LV names ---------------------------------------------- 0.81s 2025-03-22 23:57:55.461299 | orchestrator | Fail if DB LV size < 30 GiB for ceph_db_wal_devices --------------------- 0.80s 2025-03-22 23:57:55.462154 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.79s 2025-03-22 23:57:55.462951 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.77s 2025-03-22 23:57:57.795018 | orchestrator | 2025-03-22 23:57:57 | INFO  | Task a451c431-060c-4883-b4af-100d3e0b0ae1 (facts) was prepared for execution. 2025-03-22 23:58:01.229133 | orchestrator | 2025-03-22 23:57:57 | INFO  | It takes a moment until task a451c431-060c-4883-b4af-100d3e0b0ae1 (facts) has been started and output is visible here. 
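The recap above closes a play whose later tasks gathered LVs and PVs as JSON ("Get list of Ceph LVs with associated VGs", "Get list of Ceph PVs with associated VGs"), merged the two reports ("Combine JSON from _lvs_cmd_output/_pvs_cmd_output"), and derived a VG/LV name list used by the "Fail if ... LV defined in lvm_volumes is missing" checks. As a rough illustration of that shape of processing — the sample data and variable names below are stand-ins, not taken from the actual playbook — combining `lvs`/`pvs` style JSON reports might look like:

```python
import json

# Hypothetical command output in the shape produced by
# `lvs --reportformat json` / `pvs --reportformat json` (fields trimmed).
lvs_cmd_output = json.dumps({"report": [{"lv": [
    {"lv_name": "osd-block-54fa1689", "vg_name": "ceph-54fa1689"},
    {"lv_name": "osd-block-cd0beb18", "vg_name": "ceph-cd0beb18"},
]}]})
pvs_cmd_output = json.dumps({"report": [{"pv": [
    {"pv_name": "/dev/sdb", "vg_name": "ceph-cd0beb18"},
    {"pv_name": "/dev/sdc", "vg_name": "ceph-54fa1689"},
]}]})

# Merge both reports into a single dict, analogous to the
# "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" task.
lvm_report = {
    "lv": json.loads(lvs_cmd_output)["report"][0]["lv"],
    "pv": json.loads(pvs_cmd_output)["report"][0]["pv"],
}

# Build the list of "VG/LV" names that the missing-LV checks
# can compare against the entries defined in lvm_volumes.
vg_lv_names = [f"{lv['vg_name']}/{lv['lv_name']}" for lv in lvm_report["lv"]]
print(vg_lv_names)
```

The printed `lvm_report` in the log above has exactly this two-key structure, with the LV list pairing each `osd-block-*` LV to its `ceph-*` VG and the PV list mapping `/dev/sdb` and `/dev/sdc` to those VGs.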
2025-03-22 23:58:01.229241 | orchestrator | 2025-03-22 23:58:01.229753 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-03-22 23:58:01.231048 | orchestrator | 2025-03-22 23:58:01.232723 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-03-22 23:58:01.233319 | orchestrator | Saturday 22 March 2025 23:58:01 +0000 (0:00:00.229) 0:00:00.229 ******** 2025-03-22 23:58:02.819242 | orchestrator | ok: [testbed-manager] 2025-03-22 23:58:02.820957 | orchestrator | ok: [testbed-node-3] 2025-03-22 23:58:02.821003 | orchestrator | ok: [testbed-node-1] 2025-03-22 23:58:02.825320 | orchestrator | ok: [testbed-node-4] 2025-03-22 23:58:02.825853 | orchestrator | ok: [testbed-node-0] 2025-03-22 23:58:02.828085 | orchestrator | ok: [testbed-node-2] 2025-03-22 23:58:02.829028 | orchestrator | ok: [testbed-node-5] 2025-03-22 23:58:02.829792 | orchestrator | 2025-03-22 23:58:02.830631 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-03-22 23:58:02.831208 | orchestrator | Saturday 22 March 2025 23:58:02 +0000 (0:00:01.588) 0:00:01.818 ******** 2025-03-22 23:58:02.986092 | orchestrator | skipping: [testbed-manager] 2025-03-22 23:58:03.078129 | orchestrator | skipping: [testbed-node-0] 2025-03-22 23:58:03.159773 | orchestrator | skipping: [testbed-node-1] 2025-03-22 23:58:03.247853 | orchestrator | skipping: [testbed-node-2] 2025-03-22 23:58:03.330234 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:58:04.207860 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:58:04.209468 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:58:04.211272 | orchestrator | 2025-03-22 23:58:04.214263 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-03-22 23:58:04.214658 | orchestrator | 2025-03-22 23:58:04.215158 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-03-22 23:58:04.217966 | orchestrator | Saturday 22 March 2025 23:58:04 +0000 (0:00:01.395) 0:00:03.213 ******** 2025-03-22 23:58:09.278167 | orchestrator | ok: [testbed-node-1] 2025-03-22 23:58:09.278976 | orchestrator | ok: [testbed-node-2] 2025-03-22 23:58:09.280946 | orchestrator | ok: [testbed-node-0] 2025-03-22 23:58:09.281885 | orchestrator | ok: [testbed-manager] 2025-03-22 23:58:09.283680 | orchestrator | ok: [testbed-node-4] 2025-03-22 23:58:09.284553 | orchestrator | ok: [testbed-node-3] 2025-03-22 23:58:09.285304 | orchestrator | ok: [testbed-node-5] 2025-03-22 23:58:09.286103 | orchestrator | 2025-03-22 23:58:09.286870 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-03-22 23:58:09.289003 | orchestrator | 2025-03-22 23:58:09.290078 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-03-22 23:58:09.290735 | orchestrator | Saturday 22 March 2025 23:58:09 +0000 (0:00:05.070) 0:00:08.283 ******** 2025-03-22 23:58:09.438199 | orchestrator | skipping: [testbed-manager] 2025-03-22 23:58:09.515812 | orchestrator | skipping: [testbed-node-0] 2025-03-22 23:58:09.592668 | orchestrator | skipping: [testbed-node-1] 2025-03-22 23:58:09.671292 | orchestrator | skipping: [testbed-node-2] 2025-03-22 23:58:09.751527 | orchestrator | skipping: [testbed-node-3] 2025-03-22 23:58:09.795069 | orchestrator | skipping: [testbed-node-4] 2025-03-22 23:58:09.795425 | orchestrator | skipping: [testbed-node-5] 2025-03-22 23:58:09.796787 | orchestrator | 2025-03-22 23:58:09.800440 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-22 23:58:09.800894 | orchestrator | 2025-03-22 23:58:09 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-03-22 23:58:09.801685 | orchestrator | 2025-03-22 23:58:09 | INFO  | Please wait and do not abort execution. 2025-03-22 23:58:09.802858 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-22 23:58:09.803913 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-22 23:58:09.804635 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-22 23:58:09.805545 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-22 23:58:09.806683 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-22 23:58:09.807396 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-22 23:58:09.808137 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-22 23:58:09.808653 | orchestrator | 2025-03-22 23:58:09.809439 | orchestrator | 2025-03-22 23:58:09.810768 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-22 23:58:09.811078 | orchestrator | Saturday 22 March 2025 23:58:09 +0000 (0:00:00.516) 0:00:08.800 ******** 2025-03-22 23:58:09.811921 | orchestrator | =============================================================================== 2025-03-22 23:58:09.812506 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.07s 2025-03-22 23:58:09.812960 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.59s 2025-03-22 23:58:09.813666 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.40s 2025-03-22 23:58:09.814284 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s 2025-03-22 
23:58:10.447871 | orchestrator | 2025-03-22 23:58:10.451735 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Mar 22 23:58:10 UTC 2025 2025-03-22 23:58:11.869859 | orchestrator | 2025-03-22 23:58:11.869937 | orchestrator | 2025-03-22 23:58:11 | INFO  | Collection nutshell is prepared for execution 2025-03-22 23:58:11.871560 | orchestrator | 2025-03-22 23:58:11 | INFO  | D [0] - dotfiles 2025-03-22 23:58:11.871649 | orchestrator | 2025-03-22 23:58:11 | INFO  | D [0] - homer 2025-03-22 23:58:11.872827 | orchestrator | 2025-03-22 23:58:11 | INFO  | D [0] - netdata 2025-03-22 23:58:11.872856 | orchestrator | 2025-03-22 23:58:11 | INFO  | D [0] - openstackclient 2025-03-22 23:58:11.872872 | orchestrator | 2025-03-22 23:58:11 | INFO  | D [0] - phpmyadmin 2025-03-22 23:58:11.872887 | orchestrator | 2025-03-22 23:58:11 | INFO  | A [0] - common 2025-03-22 23:58:11.872907 | orchestrator | 2025-03-22 23:58:11 | INFO  | A [1] -- loadbalancer 2025-03-22 23:58:11.873359 | orchestrator | 2025-03-22 23:58:11 | INFO  | D [2] --- opensearch 2025-03-22 23:58:11.873387 | orchestrator | 2025-03-22 23:58:11 | INFO  | A [2] --- mariadb-ng 2025-03-22 23:58:11.873402 | orchestrator | 2025-03-22 23:58:11 | INFO  | D [3] ---- horizon 2025-03-22 23:58:11.873416 | orchestrator | 2025-03-22 23:58:11 | INFO  | A [3] ---- keystone 2025-03-22 23:58:11.873430 | orchestrator | 2025-03-22 23:58:11 | INFO  | A [4] ----- neutron 2025-03-22 23:58:11.873445 | orchestrator | 2025-03-22 23:58:11 | INFO  | D [5] ------ wait-for-nova 2025-03-22 23:58:11.873460 | orchestrator | 2025-03-22 23:58:11 | INFO  | A [5] ------ octavia 2025-03-22 23:58:11.873480 | orchestrator | 2025-03-22 23:58:11 | INFO  | D [4] ----- barbican 2025-03-22 23:58:11.873660 | orchestrator | 2025-03-22 23:58:11 | INFO  | D [4] ----- designate 2025-03-22 23:58:11.873685 | orchestrator | 2025-03-22 23:58:11 | INFO  | D [4] ----- ironic 2025-03-22 23:58:11.873731 | orchestrator | 2025-03-22 23:58:11 | INFO  | D [4] ----- placement 
2025-03-22 23:58:11.873746 | orchestrator | 2025-03-22 23:58:11 | INFO  | D [4] ----- magnum
2025-03-22 23:58:11.873765 | orchestrator | 2025-03-22 23:58:11 | INFO  | A [1] -- openvswitch
2025-03-22 23:58:11.873825 | orchestrator | 2025-03-22 23:58:11 | INFO  | D [2] --- ovn
2025-03-22 23:58:11.873847 | orchestrator | 2025-03-22 23:58:11 | INFO  | D [1] -- memcached
2025-03-22 23:58:11.873929 | orchestrator | 2025-03-22 23:58:11 | INFO  | D [1] -- redis
2025-03-22 23:58:11.873948 | orchestrator | 2025-03-22 23:58:11 | INFO  | D [1] -- rabbitmq-ng
2025-03-22 23:58:11.873966 | orchestrator | 2025-03-22 23:58:11 | INFO  | A [0] - kubernetes
2025-03-22 23:58:11.874218 | orchestrator | 2025-03-22 23:58:11 | INFO  | D [1] -- kubeconfig
2025-03-22 23:58:11.874418 | orchestrator | 2025-03-22 23:58:11 | INFO  | A [1] -- copy-kubeconfig
2025-03-22 23:58:11.874448 | orchestrator | 2025-03-22 23:58:11 | INFO  | A [0] - ceph
2025-03-22 23:58:11.875392 | orchestrator | 2025-03-22 23:58:11 | INFO  | A [1] -- ceph-pools
2025-03-22 23:58:11.875491 | orchestrator | 2025-03-22 23:58:11 | INFO  | A [2] --- copy-ceph-keys
2025-03-22 23:58:11.875515 | orchestrator | 2025-03-22 23:58:11 | INFO  | A [3] ---- cephclient
2025-03-22 23:58:11.875577 | orchestrator | 2025-03-22 23:58:11 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-03-22 23:58:11.875690 | orchestrator | 2025-03-22 23:58:11 | INFO  | A [4] ----- wait-for-keystone
2025-03-22 23:58:11.875709 | orchestrator | 2025-03-22 23:58:11 | INFO  | D [5] ------ kolla-ceph-rgw
2025-03-22 23:58:11.875728 | orchestrator | 2025-03-22 23:58:11 | INFO  | D [5] ------ glance
2025-03-22 23:58:11.876105 | orchestrator | 2025-03-22 23:58:11 | INFO  | D [5] ------ cinder
2025-03-22 23:58:11.876131 | orchestrator | 2025-03-22 23:58:11 | INFO  | D [5] ------ nova
2025-03-22 23:58:11.876151 | orchestrator | 2025-03-22 23:58:11 | INFO  | A [4] ----- prometheus
2025-03-22 23:58:12.042491 | orchestrator | 2025-03-22 23:58:11 | INFO  | D [5] ------ grafana
2025-03-22 23:58:12.042570 | orchestrator | 2025-03-22 23:58:12 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-03-22 23:58:13.883711 | orchestrator | 2025-03-22 23:58:12 | INFO  | Tasks are running in the background
2025-03-22 23:58:13.883862 | orchestrator | 2025-03-22 23:58:13 | INFO  | No task IDs specified, wait for all currently running tasks
2025-03-22 23:58:15.991530 | orchestrator | 2025-03-22 23:58:15 | INFO  | Task d718789a-b3d0-4cce-b98c-926521892d41 is in state STARTED
2025-03-22 23:58:15.992123 | orchestrator | 2025-03-22 23:58:15 | INFO  | Task 9d714bd6-1c02-4e18-9575-772de36bbde0 is in state STARTED
2025-03-22 23:58:15.993510 | orchestrator | 2025-03-22 23:58:15 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:58:15.994381 | orchestrator | 2025-03-22 23:58:15 | INFO  | Task 7b5f9516-5f9a-40e8-bb6e-b6aac5457c70 is in state STARTED
2025-03-22 23:58:15.997787 | orchestrator | 2025-03-22 23:58:15 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-22 23:58:15.998429 | orchestrator | 2025-03-22 23:58:15 | INFO  | Task 2e727d3e-f41f-4329-be45-75f35866aee0 is in state STARTED
2025-03-22 23:58:15.998507 | orchestrator | 2025-03-22 23:58:15 | INFO  | Wait 1 second(s) until the next check
2025-03-22 23:58:19.039477 | orchestrator | 2025-03-22 23:58:19 | INFO  | Task d718789a-b3d0-4cce-b98c-926521892d41 is in state STARTED
2025-03-22 23:58:19.041098 | orchestrator | 2025-03-22 23:58:19 | INFO  | Task 9d714bd6-1c02-4e18-9575-772de36bbde0 is in state STARTED
2025-03-22 23:58:19.042645 | orchestrator | 2025-03-22 23:58:19 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:58:19.044833 | orchestrator | 2025-03-22 23:58:19 | INFO  | Task 7b5f9516-5f9a-40e8-bb6e-b6aac5457c70 is in state STARTED
2025-03-22 23:58:19.045544 | orchestrator | 2025-03-22 23:58:19 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
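The nested A/D listing above shows the nutshell collection's tasks indexed by dependency depth ([0] through [5]), with deeper entries scheduled only after their shallower parents. A minimal sketch of grouping such a plan into depth levels, assuming a simple (depth, name) representation (hypothetical, not OSISM's actual data model, and it flattens sibling branches):

```python
from collections import defaultdict

def build_levels(plan):
    """Group a declaration-ordered plan of (depth, name) pairs into
    depth levels. Entries at level n are assumed to run only after
    level n-1 has finished (a simplification of the real tree)."""
    levels = defaultdict(list)
    for depth, name in plan:
        levels[depth].append(name)
    return [levels[d] for d in sorted(levels)]

# Mirrors a slice of the log: common [0] -> loadbalancer [1] -> mariadb-ng [2] -> ...
plan = [(0, "common"), (1, "loadbalancer"), (2, "mariadb-ng"),
        (3, "keystone"), (4, "neutron"), (5, "octavia"), (1, "openvswitch")]
print(build_levels(plan))
# -> [['common'], ['loadbalancer', 'openvswitch'], ['mariadb-ng'],
#     ['keystone'], ['neutron'], ['octavia']]
```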
2025-03-22 23:58:19.046360 | orchestrator | 2025-03-22 23:58:19 | INFO  | Task 2e727d3e-f41f-4329-be45-75f35866aee0 is in state STARTED
2025-03-22 23:58:22.098514 | orchestrator | 2025-03-22 23:58:19 | INFO  | Wait 1 second(s) until the next check
2025-03-22 23:58:22.098692 | orchestrator | 2025-03-22 23:58:22 | INFO  | Task d718789a-b3d0-4cce-b98c-926521892d41 is in state STARTED
2025-03-22 23:58:22.098784 | orchestrator | 2025-03-22 23:58:22 | INFO  | Task 9d714bd6-1c02-4e18-9575-772de36bbde0 is in state STARTED
2025-03-22 23:58:22.104739 | orchestrator | 2025-03-22 23:58:22 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:58:25.150984 | orchestrator | 2025-03-22 23:58:22 | INFO  | Task 7b5f9516-5f9a-40e8-bb6e-b6aac5457c70 is in state STARTED
2025-03-22 23:58:25.151093 | orchestrator | 2025-03-22 23:58:22 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-22 23:58:25.151113 | orchestrator | 2025-03-22 23:58:22 | INFO  | Task 2e727d3e-f41f-4329-be45-75f35866aee0 is in state STARTED
2025-03-22 23:58:25.151129 | orchestrator | 2025-03-22 23:58:22 | INFO  | Wait 1 second(s) until the next check
2025-03-22 23:58:25.151162 | orchestrator | 2025-03-22 23:58:25 | INFO  | Task d718789a-b3d0-4cce-b98c-926521892d41 is in state STARTED
2025-03-22 23:58:25.151233 | orchestrator | 2025-03-22 23:58:25 | INFO  | Task 9d714bd6-1c02-4e18-9575-772de36bbde0 is in state STARTED
2025-03-22 23:58:25.151270 | orchestrator | 2025-03-22 23:58:25 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:58:25.151285 | orchestrator | 2025-03-22 23:58:25 | INFO  | Task 7b5f9516-5f9a-40e8-bb6e-b6aac5457c70 is in state STARTED
2025-03-22 23:58:25.151305 | orchestrator | 2025-03-22 23:58:25 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-22 23:58:25.154013 | orchestrator | 2025-03-22 23:58:25 | INFO  | Task 2e727d3e-f41f-4329-be45-75f35866aee0 is in state STARTED
2025-03-22 23:58:28.218661 | orchestrator | 2025-03-22 23:58:25 | INFO  | Wait 1 second(s) until the next check
2025-03-22 23:58:28.218753 | orchestrator | 2025-03-22 23:58:28 | INFO  | Task d718789a-b3d0-4cce-b98c-926521892d41 is in state STARTED
2025-03-22 23:58:28.225759 | orchestrator | 2025-03-22 23:58:28 | INFO  | Task 9d714bd6-1c02-4e18-9575-772de36bbde0 is in state STARTED
2025-03-22 23:58:28.226850 | orchestrator | 2025-03-22 23:58:28 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:58:28.235759 | orchestrator | 2025-03-22 23:58:28 | INFO  | Task 7b5f9516-5f9a-40e8-bb6e-b6aac5457c70 is in state STARTED
2025-03-22 23:58:28.237485 | orchestrator | 2025-03-22 23:58:28 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-22 23:58:28.237512 | orchestrator | 2025-03-22 23:58:28 | INFO  | Task 2e727d3e-f41f-4329-be45-75f35866aee0 is in state STARTED
2025-03-22 23:58:31.281037 | orchestrator | 2025-03-22 23:58:28 | INFO  | Wait 1 second(s) until the next check
2025-03-22 23:58:31.281127 | orchestrator | 2025-03-22 23:58:31 | INFO  | Task d718789a-b3d0-4cce-b98c-926521892d41 is in state STARTED
2025-03-22 23:58:31.283669 | orchestrator | 2025-03-22 23:58:31 | INFO  | Task 9d714bd6-1c02-4e18-9575-772de36bbde0 is in state STARTED
2025-03-22 23:58:31.283960 | orchestrator | 2025-03-22 23:58:31 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:58:31.283973 | orchestrator | 2025-03-22 23:58:31 | INFO  | Task 7b5f9516-5f9a-40e8-bb6e-b6aac5457c70 is in state STARTED
2025-03-22 23:58:31.284506 | orchestrator | 2025-03-22 23:58:31 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-22 23:58:31.287481 | orchestrator | 2025-03-22 23:58:31 | INFO  | Task 2e727d3e-f41f-4329-be45-75f35866aee0 is in state STARTED
2025-03-22 23:58:34.341287 | orchestrator | 2025-03-22 23:58:31 | INFO  | Wait 1 second(s) until the next check
2025-03-22 23:58:34.341421 | orchestrator | 2025-03-22 23:58:34 | INFO  | Task d718789a-b3d0-4cce-b98c-926521892d41 is in state STARTED
2025-03-22 23:58:34.345836 | orchestrator | 2025-03-22 23:58:34 | INFO  | Task 9d714bd6-1c02-4e18-9575-772de36bbde0 is in state STARTED
2025-03-22 23:58:34.345867 | orchestrator | 2025-03-22 23:58:34 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:58:34.345890 | orchestrator | 2025-03-22 23:58:34 | INFO  | Task 7b5f9516-5f9a-40e8-bb6e-b6aac5457c70 is in state STARTED
2025-03-22 23:58:34.346278 | orchestrator | 2025-03-22 23:58:34 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-22 23:58:34.360695 | orchestrator | 2025-03-22 23:58:34 | INFO  | Task 2e727d3e-f41f-4329-be45-75f35866aee0 is in state STARTED
2025-03-22 23:58:37.432213 | orchestrator | 2025-03-22 23:58:34 | INFO  | Wait 1 second(s) until the next check
2025-03-22 23:58:37.432352 | orchestrator | 2025-03-22 23:58:37 | INFO  | Task d718789a-b3d0-4cce-b98c-926521892d41 is in state STARTED
2025-03-22 23:58:37.441410 | orchestrator | 2025-03-22 23:58:37 | INFO  | Task 9d714bd6-1c02-4e18-9575-772de36bbde0 is in state STARTED
2025-03-22 23:58:37.447405 | orchestrator | 2025-03-22 23:58:37 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:58:37.450197 | orchestrator | 2025-03-22 23:58:37 | INFO  | Task 7b5f9516-5f9a-40e8-bb6e-b6aac5457c70 is in state STARTED
2025-03-22 23:58:37.456849 | orchestrator | 2025-03-22 23:58:37 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-22 23:58:40.616393 | orchestrator | 2025-03-22 23:58:37 | INFO  | Task 2e727d3e-f41f-4329-be45-75f35866aee0 is in state STARTED
2025-03-22 23:58:40.616510 | orchestrator | 2025-03-22 23:58:37 | INFO  | Wait 1 second(s) until the next check
2025-03-22 23:58:40.616548 | orchestrator |
2025-03-22 23:58:40.616564 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
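The repeating "Task … is in state STARTED" / "Wait 1 second(s) until the next check" cycle above is a plain polling loop over the outstanding task IDs, dropping each task once it reaches a terminal state. A minimal sketch of that pattern, where `get_state` is a hypothetical callable standing in for the real task-state lookup:

```python
import time

TERMINAL = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_state, interval=1.0):
    """Poll until every task reaches a terminal state.

    get_state: hypothetical callable task_id -> state string;
    mirrors the STARTED/SUCCESS polling cycle in the log above.
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL:
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

Tasks that finish simply disappear from subsequent checks, which is why the list of reported task IDs shrinks over the course of the log.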
2025-03-22 23:58:40.616579 | orchestrator |
2025-03-22 23:58:40.616666 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2025-03-22 23:58:40.616689 | orchestrator | Saturday 22 March 2025 23:58:21 +0000 (0:00:00.310) 0:00:00.310 ********
2025-03-22 23:58:40.616704 | orchestrator | changed: [testbed-manager]
2025-03-22 23:58:40.616719 | orchestrator | changed: [testbed-node-1]
2025-03-22 23:58:40.616733 | orchestrator | changed: [testbed-node-0]
2025-03-22 23:58:40.616747 | orchestrator | changed: [testbed-node-2]
2025-03-22 23:58:40.616760 | orchestrator | changed: [testbed-node-3]
2025-03-22 23:58:40.616774 | orchestrator | changed: [testbed-node-4]
2025-03-22 23:58:40.616788 | orchestrator | changed: [testbed-node-5]
2025-03-22 23:58:40.616802 | orchestrator |
2025-03-22 23:58:40.616816 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-03-22 23:58:40.616830 | orchestrator | Saturday 22 March 2025 23:58:25 +0000 (0:00:03.395) 0:00:03.706 ********
2025-03-22 23:58:40.616844 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-03-22 23:58:40.616858 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-03-22 23:58:40.616878 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-03-22 23:58:40.616892 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-03-22 23:58:40.616905 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-03-22 23:58:40.616919 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-03-22 23:58:40.616935 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-03-22 23:58:40.616951 | orchestrator |
2025-03-22 23:58:40.616966 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2025-03-22 23:58:40.616981 | orchestrator | Saturday 22 March 2025 23:58:27 +0000 (0:00:02.849) 0:00:06.558 ********
2025-03-22 23:58:40.617001 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-22 23:58:25.888739', 'end': '2025-03-22 23:58:25.892139', 'delta': '0:00:00.003400', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-03-22 23:58:40.617026 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-22 23:58:25.930937', 'end': '2025-03-22 23:58:25.938739', 'delta': '0:00:00.007802', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-03-22 23:58:40.617043 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-22 23:58:25.880026', 'end': '2025-03-22 23:58:25.890094', 'delta': '0:00:00.010068', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-03-22 23:58:40.617094 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-22 23:58:26.133625', 'end': '2025-03-22 23:58:27.145619', 'delta': '0:00:01.011994', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-03-22 23:58:40.617111 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-22 23:58:26.387448', 'end': '2025-03-22 23:58:26.393353', 'delta': '0:00:00.005905', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-03-22 23:58:40.617127 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-22 23:58:26.678397', 'end': '2025-03-22 23:58:27.688676', 'delta': '0:00:01.010279', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-03-22 23:58:40.617148 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-22 23:58:26.794756', 'end': '2025-03-22 23:58:26.803098', 'delta': '0:00:00.008342', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines':
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-03-22 23:58:40.617164 | orchestrator |
2025-03-22 23:58:40.617179 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2025-03-22 23:58:40.617196 | orchestrator | Saturday 22 March 2025 23:58:30 +0000 (0:00:02.522) 0:00:09.081 ********
2025-03-22 23:58:40.617212 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-03-22 23:58:40.617227 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-03-22 23:58:40.617243 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-03-22 23:58:40.617264 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-03-22 23:58:40.617280 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-03-22 23:58:40.617296 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-03-22 23:58:40.617309 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-03-22 23:58:40.617323 | orchestrator |
2025-03-22 23:58:40.617338 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-03-22 23:58:40.617352 | orchestrator | Saturday 22 March 2025 23:58:32 +0000 (0:00:01.791) 0:00:10.873 ********
2025-03-22 23:58:40.617366 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-03-22 23:58:40.617379 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-03-22 23:58:40.617393 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-03-22 23:58:40.617407 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-03-22 23:58:40.617420 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-03-22 23:58:40.617434 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-03-22 23:58:40.617448 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-03-22 23:58:40.617461 | orchestrator |
2025-03-22 23:58:40.617475 | orchestrator | PLAY RECAP *********************************************************************
2025-03-22 23:58:40.617496 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 23:58:40.617538 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 23:58:40.617554 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 23:58:40.617568 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 23:58:40.617582 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 23:58:40.617617 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 23:58:40.617632 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 23:58:40.617645 | orchestrator |
2025-03-22 23:58:40.617659 | orchestrator |
2025-03-22 23:58:40.617673 | orchestrator | TASKS RECAP ********************************************************************
2025-03-22 23:58:40.617687 | orchestrator | Saturday 22 March 2025 23:58:36 +0000 (0:00:04.208) 0:00:15.081 ********
2025-03-22 23:58:40.617701 | orchestrator | ===============================================================================
2025-03-22 23:58:40.617714 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 4.21s
2025-03-22 23:58:40.617728 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.40s
2025-03-22 23:58:40.617742 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.85s
2025-03-22 23:58:40.617756 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.52s
2025-03-22 23:58:40.617769 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.79s
2025-03-22 23:58:40.617788 | orchestrator | 2025-03-22 23:58:40 | INFO  | Task de692522-120e-4277-a3f1-cc74817ce6a4 is in state STARTED
2025-03-22 23:58:40.620739 | orchestrator | 2025-03-22 23:58:40 | INFO  | Task d718789a-b3d0-4cce-b98c-926521892d41 is in state SUCCESS
2025-03-22 23:58:40.620782 | orchestrator | 2025-03-22 23:58:40 | INFO  | Task 9d714bd6-1c02-4e18-9575-772de36bbde0 is in state STARTED
2025-03-22 23:58:40.626103 | orchestrator | 2025-03-22 23:58:40 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:58:40.637328 | orchestrator | 2025-03-22 23:58:40 | INFO  | Task 7b5f9516-5f9a-40e8-bb6e-b6aac5457c70 is in state STARTED
2025-03-22 23:58:40.641200 | orchestrator | 2025-03-22 23:58:40 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-22 23:58:40.644708 | orchestrator | 2025-03-22 23:58:40 | INFO  | Task 2e727d3e-f41f-4329-be45-75f35866aee0 is in state STARTED
2025-03-22 23:58:43.722096 | orchestrator | 2025-03-22 23:58:40 | INFO  | Wait 1 second(s)
until the next check
2025-03-22 23:58:43.722238 | orchestrator | 2025-03-22 23:58:43 | INFO  | Task de692522-120e-4277-a3f1-cc74817ce6a4 is in state STARTED
2025-03-22 23:58:43.724104 | orchestrator | 2025-03-22 23:58:43 | INFO  | Task 9d714bd6-1c02-4e18-9575-772de36bbde0 is in state STARTED
2025-03-22 23:58:43.729924 | orchestrator | 2025-03-22 23:58:43 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:58:43.733173 | orchestrator | 2025-03-22 23:58:43 | INFO  | Task 7b5f9516-5f9a-40e8-bb6e-b6aac5457c70 is in state STARTED
2025-03-22 23:58:43.739956 | orchestrator | 2025-03-22 23:58:43 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-22 23:58:43.743935 | orchestrator | 2025-03-22 23:58:43 | INFO  | Task 2e727d3e-f41f-4329-be45-75f35866aee0 is in state STARTED
2025-03-22 23:58:46.830699 | orchestrator | 2025-03-22 23:58:43 | INFO  | Wait 1 second(s) until the next check
2025-03-22 23:58:46.830840 | orchestrator | 2025-03-22 23:58:46 | INFO  | Task de692522-120e-4277-a3f1-cc74817ce6a4 is in state STARTED
2025-03-22 23:58:46.847260 | orchestrator | 2025-03-22 23:58:46 | INFO  | Task 9d714bd6-1c02-4e18-9575-772de36bbde0 is in state STARTED
2025-03-22 23:58:46.847486 | orchestrator | 2025-03-22 23:58:46 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:58:46.847515 | orchestrator | 2025-03-22 23:58:46 | INFO  | Task 7b5f9516-5f9a-40e8-bb6e-b6aac5457c70 is in state STARTED
2025-03-22 23:58:46.847531 | orchestrator | 2025-03-22 23:58:46 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-22 23:58:46.847552 | orchestrator | 2025-03-22 23:58:46 | INFO  | Task 2e727d3e-f41f-4329-be45-75f35866aee0 is in state STARTED
2025-03-22 23:58:49.911805 | orchestrator | 2025-03-22 23:58:46 | INFO  | Wait 1 second(s) until the next check
2025-03-22 23:58:49.911944 | orchestrator | 2025-03-22 23:58:49 | INFO  | Task de692522-120e-4277-a3f1-cc74817ce6a4 is in state STARTED
2025-03-22 23:58:49.915908 | orchestrator | 2025-03-22 23:58:49 | INFO  | Task 9d714bd6-1c02-4e18-9575-772de36bbde0 is in state STARTED
2025-03-22 23:58:49.921034 | orchestrator | 2025-03-22 23:58:49 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:58:49.922141 | orchestrator | 2025-03-22 23:58:49 | INFO  | Task 7b5f9516-5f9a-40e8-bb6e-b6aac5457c70 is in state STARTED
2025-03-22 23:58:49.927752 | orchestrator | 2025-03-22 23:58:49 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-22 23:58:49.928704 | orchestrator | 2025-03-22 23:58:49 | INFO  | Task 2e727d3e-f41f-4329-be45-75f35866aee0 is in state STARTED
2025-03-22 23:58:53.009460 | orchestrator | 2025-03-22 23:58:49 | INFO  | Wait 1 second(s) until the next check
2025-03-22 23:58:53.009581 | orchestrator | 2025-03-22 23:58:53 | INFO  | Task de692522-120e-4277-a3f1-cc74817ce6a4 is in state STARTED
2025-03-22 23:58:53.011386 | orchestrator | 2025-03-22 23:58:53 | INFO  | Task 9d714bd6-1c02-4e18-9575-772de36bbde0 is in state STARTED
2025-03-22 23:58:53.021310 | orchestrator | 2025-03-22 23:58:53 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:58:53.021745 | orchestrator | 2025-03-22 23:58:53 | INFO  | Task 7b5f9516-5f9a-40e8-bb6e-b6aac5457c70 is in state STARTED
2025-03-22 23:58:53.021772 | orchestrator | 2025-03-22 23:58:53 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-22 23:58:53.021789 | orchestrator | 2025-03-22 23:58:53 | INFO  | Task 2e727d3e-f41f-4329-be45-75f35866aee0 is in state STARTED
2025-03-22 23:58:56.103364 | orchestrator | 2025-03-22 23:58:53 | INFO  | Wait 1 second(s) until the next check
2025-03-22 23:58:56.103482 | orchestrator | 2025-03-22 23:58:56 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-22 23:58:56.106912 | orchestrator | 2025-03-22 23:58:56 | INFO  | Task de692522-120e-4277-a3f1-cc74817ce6a4 is in state STARTED
2025-03-22 23:58:56.109489 | orchestrator | 2025-03-22 23:58:56 | INFO  | Task 9d714bd6-1c02-4e18-9575-772de36bbde0 is in state STARTED
2025-03-22 23:58:56.109518 | orchestrator | 2025-03-22 23:58:56 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:58:56.109700 | orchestrator | 2025-03-22 23:58:56 | INFO  | Task 7b5f9516-5f9a-40e8-bb6e-b6aac5457c70 is in state STARTED
2025-03-22 23:58:56.110792 | orchestrator | 2025-03-22 23:58:56 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-22 23:58:56.111674 | orchestrator | 2025-03-22 23:58:56 | INFO  | Task 2e727d3e-f41f-4329-be45-75f35866aee0 is in state SUCCESS
2025-03-22 23:58:59.213379 | orchestrator | 2025-03-22 23:58:56 | INFO  | Wait 1 second(s) until the next check
2025-03-22 23:58:59.213584 | orchestrator | 2025-03-22 23:58:59 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-22 23:58:59.213730 | orchestrator | 2025-03-22 23:58:59 | INFO  | Task de692522-120e-4277-a3f1-cc74817ce6a4 is in state STARTED
2025-03-22 23:58:59.220418 | orchestrator | 2025-03-22 23:58:59 | INFO  | Task 9d714bd6-1c02-4e18-9575-772de36bbde0 is in state STARTED
2025-03-22 23:58:59.223186 | orchestrator | 2025-03-22 23:58:59 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:58:59.230299 | orchestrator | 2025-03-22 23:58:59 | INFO  | Task 7b5f9516-5f9a-40e8-bb6e-b6aac5457c70 is in state STARTED
2025-03-22 23:58:59.234077 | orchestrator | 2025-03-22 23:58:59 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-22 23:59:02.312891 | orchestrator | 2025-03-22 23:58:59 | INFO  | Wait 1 second(s) until the next check
2025-03-22 23:59:02.313010 | orchestrator | 2025-03-22 23:59:02 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-22 23:59:02.327906 | orchestrator | 2025-03-22 23:59:02 | INFO  | Task de692522-120e-4277-a3f1-cc74817ce6a4 is in state STARTED
2025-03-22 23:59:02.328029 | orchestrator | 2025-03-22 23:59:02 | INFO  | Task 9d714bd6-1c02-4e18-9575-772de36bbde0 is in state STARTED
2025-03-22 23:59:05.379542 | orchestrator | 2025-03-22 23:59:02 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:59:05.379740 | orchestrator | 2025-03-22 23:59:02 | INFO  | Task 7b5f9516-5f9a-40e8-bb6e-b6aac5457c70 is in state STARTED
2025-03-22 23:59:05.379781 | orchestrator | 2025-03-22 23:59:02 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-22 23:59:05.379796 | orchestrator | 2025-03-22 23:59:02 | INFO  | Wait 1 second(s) until the next check
2025-03-22 23:59:05.379826 | orchestrator | 2025-03-22 23:59:05 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-22 23:59:05.379911 | orchestrator | 2025-03-22 23:59:05 | INFO  | Task de692522-120e-4277-a3f1-cc74817ce6a4 is in state STARTED
2025-03-22 23:59:05.380558 | orchestrator | 2025-03-22 23:59:05 | INFO  | Task 9d714bd6-1c02-4e18-9575-772de36bbde0 is in state STARTED
2025-03-22 23:59:05.381386 | orchestrator | 2025-03-22 23:59:05 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:59:05.381835 | orchestrator | 2025-03-22 23:59:05 | INFO  | Task 7b5f9516-5f9a-40e8-bb6e-b6aac5457c70 is in state STARTED
2025-03-22 23:59:05.382851 | orchestrator | 2025-03-22 23:59:05 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-22 23:59:08.457153 | orchestrator | 2025-03-22 23:59:05 | INFO  | Wait 1 second(s) until the next check
2025-03-22 23:59:08.457280 | orchestrator | 2025-03-22 23:59:08 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-22 23:59:08.457488 | orchestrator | 2025-03-22 23:59:08 | INFO  | Task de692522-120e-4277-a3f1-cc74817ce6a4 is in state STARTED
2025-03-22 23:59:08.457518 | orchestrator | 2025-03-22 23:59:08 | INFO  | Task 9d714bd6-1c02-4e18-9575-772de36bbde0 is in state STARTED
2025-03-22 23:59:08.458537 | orchestrator | 2025-03-22 23:59:08 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:59:08.459221 | orchestrator | 2025-03-22 23:59:08 | INFO  | Task 7b5f9516-5f9a-40e8-bb6e-b6aac5457c70 is in state STARTED
2025-03-22 23:59:08.461853 | orchestrator | 2025-03-22 23:59:08 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-22 23:59:11.524214 | orchestrator | 2025-03-22 23:59:08 | INFO  | Wait 1 second(s) until the next check
2025-03-22 23:59:11.524332 | orchestrator | 2025-03-22 23:59:11 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-22 23:59:14.585639 | orchestrator | 2025-03-22 23:59:11 | INFO  | Task de692522-120e-4277-a3f1-cc74817ce6a4 is in state STARTED
2025-03-22 23:59:14.585759 | orchestrator | 2025-03-22 23:59:11 | INFO  | Task 9d714bd6-1c02-4e18-9575-772de36bbde0 is in state STARTED
2025-03-22 23:59:14.585778 | orchestrator | 2025-03-22 23:59:11 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:59:14.585793 | orchestrator | 2025-03-22 23:59:11 | INFO  | Task 7b5f9516-5f9a-40e8-bb6e-b6aac5457c70 is in state SUCCESS
2025-03-22 23:59:14.585808 | orchestrator | 2025-03-22 23:59:11 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-22 23:59:14.585822 | orchestrator | 2025-03-22 23:59:11 | INFO  | Wait 1 second(s) until the next check
2025-03-22 23:59:14.585855 | orchestrator | 2025-03-22 23:59:14 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-22 23:59:14.585992 | orchestrator | 2025-03-22 23:59:14 | INFO  | Task de692522-120e-4277-a3f1-cc74817ce6a4 is in state STARTED
2025-03-22 23:59:14.587205 | orchestrator | 2025-03-22 23:59:14 | INFO  | Task 9d714bd6-1c02-4e18-9575-772de36bbde0 is in state STARTED
2025-03-22 23:59:14.589615 | orchestrator | 2025-03-22 23:59:14 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:59:14.594202 | orchestrator | 2025-03-22 23:59:14 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-22 23:59:17.639127 | orchestrator | 2025-03-22 23:59:14 | INFO  | Wait 1 second(s) until the next check
2025-03-22 23:59:17.639270 | orchestrator | 2025-03-22 23:59:17 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-22 23:59:17.648796 | orchestrator | 2025-03-22 23:59:17 | INFO  | Task de692522-120e-4277-a3f1-cc74817ce6a4 is in state STARTED
2025-03-22 23:59:17.648876 | orchestrator | 2025-03-22 23:59:17 | INFO  | Task 9d714bd6-1c02-4e18-9575-772de36bbde0 is in state STARTED
2025-03-22 23:59:17.656243 | orchestrator | 2025-03-22 23:59:17 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:59:17.659670 | orchestrator | 2025-03-22 23:59:17 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-22 23:59:20.705989 | orchestrator | 2025-03-22 23:59:17 | INFO  | Wait 1 second(s) until the next check
2025-03-22 23:59:20.706172 | orchestrator | 2025-03-22 23:59:20 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-22 23:59:20.710454 | orchestrator | 2025-03-22 23:59:20 | INFO  | Task de692522-120e-4277-a3f1-cc74817ce6a4 is in state STARTED
2025-03-22 23:59:20.710489 | orchestrator | 2025-03-22 23:59:20 | INFO  | Task 9d714bd6-1c02-4e18-9575-772de36bbde0 is in state STARTED
2025-03-22 23:59:20.712177 | orchestrator | 2025-03-22 23:59:20 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:59:20.714105 | orchestrator | 2025-03-22 23:59:20 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-22 23:59:23.834781 | orchestrator | 2025-03-22 23:59:20 | INFO  | Wait 1 second(s) until the next check
2025-03-22 23:59:23.834919 | orchestrator | 2025-03-22 23:59:23 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-22 23:59:23.835013 | orchestrator | 2025-03-22 23:59:23 | INFO  | Task de692522-120e-4277-a3f1-cc74817ce6a4 is in state STARTED
2025-03-22 23:59:23.835408 | orchestrator | 2025-03-22 23:59:23 | INFO  | Task 9d714bd6-1c02-4e18-9575-772de36bbde0 is in state STARTED
2025-03-22 23:59:23.837499 | orchestrator | 2025-03-22 23:59:23 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:59:23.838482 | orchestrator | 2025-03-22 23:59:23 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-22 23:59:26.884369 | orchestrator | 2025-03-22 23:59:23 | INFO  | Wait 1 second(s) until the next check
2025-03-22 23:59:26.884490 | orchestrator | 2025-03-22 23:59:26 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-22 23:59:26.884566 | orchestrator | 2025-03-22 23:59:26 | INFO  | Task de692522-120e-4277-a3f1-cc74817ce6a4 is in state STARTED
2025-03-22 23:59:26.885227 | orchestrator | 2025-03-22 23:59:26 | INFO  | Task 9d714bd6-1c02-4e18-9575-772de36bbde0 is in state STARTED
2025-03-22 23:59:26.885788 | orchestrator | 2025-03-22 23:59:26 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:59:26.886637 | orchestrator | 2025-03-22 23:59:26 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-22 23:59:29.931155 | orchestrator | 2025-03-22 23:59:26 | INFO  | Wait 1 second(s) until the next check
2025-03-22 23:59:29.931295 | orchestrator | 2025-03-22 23:59:29 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-22 23:59:29.931880 | orchestrator | 2025-03-22 23:59:29 | INFO  | Task de692522-120e-4277-a3f1-cc74817ce6a4 is in state STARTED
2025-03-22 23:59:29.931910 | orchestrator | 2025-03-22 23:59:29 | INFO  | Task 9d714bd6-1c02-4e18-9575-772de36bbde0 is in state STARTED
2025-03-22 23:59:29.931933 | orchestrator | 2025-03-22 23:59:29 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:59:29.934876 | orchestrator | 2025-03-22 23:59:29 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-22 23:59:29.938881 | orchestrator | 2025-03-22 23:59:29 | INFO  | Wait 1 second(s) until the next check
2025-03-22 23:59:32.982332 | orchestrator | 2025-03-22 23:59:32 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-22 23:59:32.982795 | orchestrator | 2025-03-22 23:59:32 | INFO  | Task de692522-120e-4277-a3f1-cc74817ce6a4 is in state STARTED
2025-03-22 23:59:32.986504 | orchestrator | 2025-03-22 23:59:32 | INFO  | Task 9d714bd6-1c02-4e18-9575-772de36bbde0 is in state STARTED
2025-03-22 23:59:32.994690 | orchestrator | 2025-03-22 23:59:32 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:59:36.054737 | orchestrator | 2025-03-22 23:59:32 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-22 23:59:36.054820 | orchestrator | 2025-03-22 23:59:32 | INFO  | Wait 1 second(s) until the next check
2025-03-22 23:59:36.054840 | orchestrator | 2025-03-22 23:59:36 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-22 23:59:39.142986 | orchestrator | 2025-03-22 23:59:36 | INFO  | Task de692522-120e-4277-a3f1-cc74817ce6a4 is in state STARTED
2025-03-22 23:59:39.143107 | orchestrator | 2025-03-22 23:59:36 | INFO  | Task 9d714bd6-1c02-4e18-9575-772de36bbde0 is in state STARTED
2025-03-22 23:59:39.143128 | orchestrator | 2025-03-22 23:59:36 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:59:39.143143 | orchestrator | 2025-03-22 23:59:36 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-22 23:59:39.143203 | orchestrator | 2025-03-22 23:59:36 | INFO  | Wait 1 second(s) until the next check
2025-03-22 23:59:39.143240 | orchestrator | 2025-03-22 23:59:39 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-22 23:59:39.143328 | orchestrator | 2025-03-22 23:59:39 | INFO  | Task de692522-120e-4277-a3f1-cc74817ce6a4 is in state STARTED
2025-03-22 23:59:39.146829 | orchestrator | 2025-03-22 23:59:39 | INFO  | Task 9d714bd6-1c02-4e18-9575-772de36bbde0 is in state STARTED
2025-03-22 23:59:39.148767 | orchestrator | 2025-03-22 23:59:39 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:59:39.148799 | orchestrator | 2025-03-22 23:59:39 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-22 23:59:42.200814 | orchestrator | 2025-03-22 23:59:39 | INFO  | Wait 1 second(s) until the next check
2025-03-22 23:59:42.200952 | orchestrator | 2025-03-22 23:59:42 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-22 23:59:42.201076 | orchestrator | 2025-03-22 23:59:42 | INFO  | Task de692522-120e-4277-a3f1-cc74817ce6a4 is in state STARTED
2025-03-22 23:59:42.201387 | orchestrator | 2025-03-22 23:59:42 | INFO  | Task 9d714bd6-1c02-4e18-9575-772de36bbde0 is in state SUCCESS
2025-03-22 23:59:42.203290 | orchestrator |
2025-03-22 23:59:42.203340 | orchestrator |
2025-03-22 23:59:42.203355 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-03-22 23:59:42.203371 | orchestrator |
2025-03-22 23:59:42.203386 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-03-22 23:59:42.203401 | orchestrator | Saturday 22 March 2025 23:58:19 +0000 (0:00:00.539) 0:00:00.539 ********
2025-03-22 23:59:42.203415 | orchestrator | ok: [testbed-manager] => {
2025-03-22 23:59:42.203431 | orchestrator |     "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-03-22 23:59:42.203447 | orchestrator | }
2025-03-22 23:59:42.203461 | orchestrator |
2025-03-22 23:59:42.203475 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-03-22 23:59:42.203489 | orchestrator | Saturday 22 March 2025 23:58:19 +0000 (0:00:00.299) 0:00:00.839 ********
2025-03-22 23:59:42.203503 | orchestrator | ok: [testbed-manager]
2025-03-22 23:59:42.203518 | orchestrator |
2025-03-22 23:59:42.203532 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-03-22 23:59:42.203546 | orchestrator | Saturday 22 March 2025 23:58:20 +0000 (0:00:01.323) 0:00:02.162 ********
2025-03-22 23:59:42.203580 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-03-22 23:59:42.203640 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-03-22 23:59:42.203657 | orchestrator |
2025-03-22 23:59:42.203670 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-03-22 23:59:42.203685 | orchestrator | Saturday 22 March 2025 23:58:21 +0000 (0:00:01.231) 0:00:03.393 ********
2025-03-22 23:59:42.203698 | orchestrator | changed: [testbed-manager]
2025-03-22 23:59:42.203712 | orchestrator |
2025-03-22 23:59:42.203726 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-03-22 23:59:42.203740 | orchestrator | Saturday 22 March 2025 23:58:23 +0000 (0:00:01.989) 0:00:05.382 ********
2025-03-22 23:59:42.203754 | orchestrator | changed: [testbed-manager]
2025-03-22 23:59:42.203768 | orchestrator |
2025-03-22 23:59:42.203782 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-03-22 23:59:42.203795 | orchestrator | Saturday 22 March 2025 23:58:25 +0000 (0:00:01.943) 0:00:07.326 ********
2025-03-22 23:59:42.203809 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-03-22 23:59:42.203823 | orchestrator | ok: [testbed-manager]
2025-03-22 23:59:42.203837 | orchestrator |
2025-03-22 23:59:42.203851 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-03-22 23:59:42.203865 | orchestrator | Saturday 22 March 2025 23:58:50 +0000 (0:00:24.627) 0:00:31.954 ********
2025-03-22 23:59:42.203882 | orchestrator | changed: [testbed-manager]
2025-03-22 23:59:42.203898 | orchestrator |
2025-03-22 23:59:42.203920 | orchestrator | PLAY RECAP *********************************************************************
2025-03-22 23:59:42.203936 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 23:59:42.203953 | orchestrator |
2025-03-22 23:59:42.203968 | orchestrator |
2025-03-22 23:59:42.203983 | orchestrator | TASKS RECAP ********************************************************************
2025-03-22 23:59:42.203999 | orchestrator | Saturday 22 March 2025 23:58:53 +0000 (0:00:02.649) 0:00:34.604 ********
2025-03-22 23:59:42.204014 | orchestrator | ===============================================================================
2025-03-22 23:59:42.204030 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.63s
2025-03-22 23:59:42.204045 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.65s
2025-03-22 23:59:42.204060 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 1.99s
2025-03-22 23:59:42.204075 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.94s
2025-03-22 23:59:42.204091 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.32s
2025-03-22 23:59:42.204105 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.23s
2025-03-22 23:59:42.204121 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.30s
2025-03-22 23:59:42.204137 | orchestrator |
2025-03-22 23:59:42.204152 | orchestrator |
2025-03-22 23:59:42.204168 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-03-22 23:59:42.204183 | orchestrator |
2025-03-22 23:59:42.204199 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-03-22 23:59:42.204214 | orchestrator | Saturday 22 March 2025 23:58:20 +0000 (0:00:00.518) 0:00:00.518 ********
2025-03-22 23:59:42.204229 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-03-22 23:59:42.204244 | orchestrator |
2025-03-22 23:59:42.204258 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-03-22 23:59:42.204272 | orchestrator | Saturday 22 March 2025 23:58:21 +0000 (0:00:00.518) 0:00:01.036 ********
2025-03-22 23:59:42.204286 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-03-22 23:59:42.204308 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-03-22 23:59:42.204322 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-03-22 23:59:42.204336 | orchestrator |
2025-03-22 23:59:42.204350 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-03-22 23:59:42.204363 | orchestrator | Saturday 22 March 2025 23:58:22 +0000 (0:00:01.498) 0:00:02.535 ********
2025-03-22 23:59:42.204377 | orchestrator | changed: [testbed-manager]
2025-03-22 23:59:42.204391 | orchestrator |
2025-03-22 23:59:42.204404 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-03-22 23:59:42.204418 | orchestrator | Saturday 22 March 2025 23:58:24 +0000 (0:00:01.242) 0:00:03.778 ********
2025-03-22 23:59:42.204443 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-03-22 23:59:42.204458 | orchestrator | ok: [testbed-manager]
2025-03-22 23:59:42.204472 | orchestrator |
2025-03-22 23:59:42.204486 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-03-22 23:59:42.204500 | orchestrator | Saturday 22 March 2025 23:58:58 +0000 (0:00:34.762) 0:00:38.540 ********
2025-03-22 23:59:42.204513 | orchestrator | changed: [testbed-manager]
2025-03-22 23:59:42.204527 | orchestrator |
2025-03-22 23:59:42.204541 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-03-22 23:59:42.204555 | orchestrator | Saturday 22 March 2025 23:59:00 +0000 (0:00:01.693) 0:00:40.233 ********
2025-03-22 23:59:42.204568 | orchestrator | ok: [testbed-manager]
2025-03-22 23:59:42.204582 | orchestrator |
2025-03-22 23:59:42.204623 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-03-22 23:59:42.204639 | orchestrator | Saturday 22 March 2025 23:59:01 +0000 (0:00:01.203) 0:00:41.437 ********
2025-03-22 23:59:42.204654 | orchestrator | changed: [testbed-manager]
2025-03-22 23:59:42.204667 | orchestrator |
2025-03-22 23:59:42.204681 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-03-22 23:59:42.204695 | orchestrator | Saturday 22 March 2025 23:59:05 +0000 (0:00:03.545) 0:00:44.982 ********
2025-03-22 23:59:42.204708 | orchestrator | changed: [testbed-manager]
2025-03-22 23:59:42.204722 | orchestrator |
2025-03-22 23:59:42.204736 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-03-22 23:59:42.204750 | orchestrator | Saturday 22 March 2025 23:59:06 +0000 (0:00:01.484) 0:00:46.467 ********
2025-03-22 23:59:42.204763 | orchestrator | changed: [testbed-manager]
2025-03-22 23:59:42.204777 | orchestrator |
2025-03-22 23:59:42.204791 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-03-22 23:59:42.204810 | orchestrator | Saturday 22 March 2025 23:59:07 +0000 (0:00:01.136) 0:00:47.604 ********
2025-03-22 23:59:42.204825 | orchestrator | ok: [testbed-manager]
2025-03-22 23:59:42.204839 | orchestrator |
2025-03-22 23:59:42.204852 | orchestrator | PLAY RECAP *********************************************************************
2025-03-22 23:59:42.204866 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 23:59:42.204880 | orchestrator |
2025-03-22 23:59:42.204893 | orchestrator |
2025-03-22 23:59:42.204907 | orchestrator | TASKS RECAP ********************************************************************
2025-03-22 23:59:42.204921 | orchestrator | Saturday 22 March 2025 23:59:08 +0000 (0:00:00.615) 0:00:48.219 ********
2025-03-22 23:59:42.204935 | orchestrator | ===============================================================================
2025-03-22 23:59:42.204948 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 34.76s
2025-03-22 23:59:42.204962 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.55s
2025-03-22 23:59:42.204976 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.69s
2025-03-22 23:59:42.204989 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.50s
2025-03-22 23:59:42.205003 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.48s
2025-03-22 23:59:42.205024 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.24s
2025-03-22 23:59:42.205038 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.20s
2025-03-22 23:59:42.205052 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.14s
2025-03-22 23:59:42.205066 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.62s
2025-03-22 23:59:42.205080 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.52s
2025-03-22 23:59:42.205093 | orchestrator |
2025-03-22 23:59:42.205107 | orchestrator |
2025-03-22 23:59:42.205121 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-03-22 23:59:42.205134 | orchestrator |
2025-03-22 23:59:42.205148 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-03-22 23:59:42.205162 | orchestrator | Saturday 22 March 2025 23:58:21 +0000 (0:00:00.272) 0:00:00.272 ********
2025-03-22 23:59:42.205176 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-03-22 23:59:42.205189 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-03-22 23:59:42.205203 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-03-22 23:59:42.205217 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-03-22 23:59:42.205230 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-03-22 23:59:42.205244 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-03-22 23:59:42.205258 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-03-22 23:59:42.205272 | orchestrator |
2025-03-22 23:59:42.205286 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-03-22 23:59:42.205299 | orchestrator |
2025-03-22 23:59:42.205313 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-03-22 23:59:42.205326 | orchestrator | Saturday 22 March 2025 23:58:23 +0000 (0:00:01.787) 0:00:02.059 ********
2025-03-22 23:59:42.205354 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-22 23:59:42.205371 | orchestrator |
2025-03-22 23:59:42.205385 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-03-22 23:59:42.205399 | orchestrator | Saturday 22 March 2025 23:58:25 +0000 (0:00:02.617) 0:00:04.677 ********
2025-03-22 23:59:42.205412 | orchestrator | ok: [testbed-manager]
2025-03-22 23:59:42.205426 | orchestrator | ok: [testbed-node-0]
2025-03-22 23:59:42.205440 | orchestrator | ok: [testbed-node-1]
2025-03-22 23:59:42.205454 | orchestrator | ok: [testbed-node-3]
2025-03-22 23:59:42.205468 | orchestrator | ok: [testbed-node-2]
2025-03-22 23:59:42.205487 | orchestrator | ok: [testbed-node-4]
2025-03-22 23:59:42.205502 | orchestrator | ok: [testbed-node-5]
2025-03-22 23:59:42.205515 | orchestrator |
2025-03-22 23:59:42.205529 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-03-22 23:59:42.205543 | orchestrator | Saturday 22 March 2025 23:58:27 +0000 (0:00:01.730) 0:00:06.407 ********
2025-03-22 23:59:42.205557 | orchestrator | ok: [testbed-node-1]
2025-03-22 23:59:42.205570 | orchestrator | ok: [testbed-manager]
2025-03-22 23:59:42.205584 | orchestrator | ok: [testbed-node-0]
2025-03-22 23:59:42.205655 | orchestrator | ok: [testbed-node-3]
2025-03-22 23:59:42.205671 | orchestrator | ok: [testbed-node-2]
2025-03-22 23:59:42.205685 | orchestrator | ok: [testbed-node-4]
2025-03-22 23:59:42.205699 | orchestrator | ok: [testbed-node-5]
2025-03-22 23:59:42.205713 | orchestrator |
2025-03-22 23:59:42.205727 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-03-22 23:59:42.205742 | orchestrator | Saturday 22 March 2025 23:58:30 +0000 (0:00:03.136) 0:00:09.544 ********
2025-03-22 23:59:42.205756 | orchestrator | changed: [testbed-manager]
2025-03-22 23:59:42.205770 | orchestrator | changed: [testbed-node-0]
2025-03-22 23:59:42.205784 | orchestrator | changed: [testbed-node-1]
2025-03-22 23:59:42.205816 | orchestrator | changed: [testbed-node-3]
2025-03-22 23:59:42.205830 | orchestrator | changed: [testbed-node-2]
2025-03-22 23:59:42.205844 | orchestrator | changed: [testbed-node-4]
2025-03-22 23:59:42.205858 | orchestrator | changed: [testbed-node-5]
2025-03-22 23:59:42.205872 | orchestrator |
2025-03-22 23:59:42.205886 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-03-22 23:59:42.205901 | orchestrator | Saturday 22 March 2025 23:58:32 +0000 (0:00:02.165) 0:00:11.710 ********
2025-03-22 23:59:42.205915 | orchestrator | changed: [testbed-node-1]
2025-03-22 23:59:42.205927 | orchestrator | changed: [testbed-node-4]
2025-03-22 23:59:42.205939 | orchestrator | changed: [testbed-node-0]
2025-03-22 23:59:42.205952 | orchestrator | changed: [testbed-node-5]
2025-03-22 23:59:42.205964 | orchestrator | changed: [testbed-manager]
2025-03-22 23:59:42.205976 | orchestrator | changed: [testbed-node-3]
2025-03-22 23:59:42.205989 | orchestrator | changed: [testbed-node-2]
2025-03-22 23:59:42.206001 | orchestrator |
2025-03-22 23:59:42.206014 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-03-22 23:59:42.206097 | orchestrator | Saturday 22 March 2025 23:58:46 +0000 (0:00:13.414) 0:00:25.125 ********
2025-03-22 23:59:42.206110 | orchestrator | changed: [testbed-node-1]
2025-03-22 23:59:42.206122 | orchestrator | changed: [testbed-node-0]
2025-03-22 23:59:42.206135 | orchestrator | changed: [testbed-node-4]
2025-03-22 23:59:42.206147 | orchestrator | changed: [testbed-node-3]
2025-03-22 23:59:42.206160 | orchestrator | changed: [testbed-node-2]
2025-03-22 23:59:42.206172 | orchestrator | changed: [testbed-node-5]
2025-03-22 23:59:42.206185 | orchestrator | changed: [testbed-manager]
2025-03-22 23:59:42.206197 | orchestrator |
2025-03-22 23:59:42.206215 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-03-22 23:59:42.206228 | orchestrator | Saturday 22 March 2025 23:59:04 +0000 (0:00:18.246) 0:00:43.371 ********
2025-03-22 23:59:42.206241 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-22 23:59:42.206258 | orchestrator |
2025-03-22 23:59:42.206270 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-03-22 23:59:42.206283 | orchestrator | Saturday 22 March 2025 23:59:09 +0000 (0:00:05.428) 0:00:48.799 ********
2025-03-22 23:59:42.206295 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-03-22 23:59:42.206308 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-03-22 23:59:42.206321 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-03-22 23:59:42.206333 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-03-22 23:59:42.206346 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-03-22 23:59:42.206358 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-03-22 23:59:42.206370 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-03-22 23:59:42.206383 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-03-22 23:59:42.206395 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-03-22 23:59:42.206407 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-03-22 23:59:42.206420 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-03-22 23:59:42.206432 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-03-22 23:59:42.206444 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-03-22 23:59:42.206457 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-03-22 23:59:42.206470 | orchestrator |
2025-03-22 23:59:42.206482 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-03-22 23:59:42.206495 | orchestrator | Saturday 22 March 2025 23:59:18 +0000 (0:00:08.683) 0:00:57.482 ********
2025-03-22 23:59:42.206511 | orchestrator | ok: [testbed-manager]
2025-03-22 23:59:42.206524 | orchestrator | ok: [testbed-node-0]
2025-03-22 23:59:42.206543 | orchestrator | ok: [testbed-node-1]
2025-03-22 23:59:42.206556 | orchestrator | ok: [testbed-node-2]
2025-03-22 23:59:42.206568 | orchestrator | ok: [testbed-node-3]
2025-03-22 23:59:42.206581 | orchestrator | ok: [testbed-node-4]
2025-03-22 23:59:42.206611 | orchestrator | ok: [testbed-node-5]
2025-03-22 23:59:42.206634 | orchestrator |
2025-03-22 23:59:42.206656 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-03-22 23:59:42.206671 | orchestrator | Saturday 22 March 2025 23:59:20 +0000 (0:00:02.170) 0:00:59.653 ********
2025-03-22 23:59:42.206684 | orchestrator | changed: [testbed-manager]
2025-03-22 23:59:42.206696 | orchestrator | changed: [testbed-node-0]
2025-03-22 23:59:42.206708 | orchestrator | changed: [testbed-node-1]
2025-03-22 23:59:42.206721 | orchestrator | changed: [testbed-node-2]
2025-03-22 23:59:42.206733 | orchestrator | changed: [testbed-node-3]
2025-03-22 23:59:42.206745 | orchestrator | changed: [testbed-node-4]
2025-03-22 23:59:42.206758 | orchestrator | changed: [testbed-node-5]
2025-03-22 23:59:42.206770 | orchestrator |
2025-03-22 23:59:42.206782 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-03-22 23:59:42.206804 | orchestrator | Saturday 22 March 2025 23:59:24 +0000 (0:00:03.435) 0:01:03.088 ********
2025-03-22 23:59:42.206817 | orchestrator | ok: [testbed-node-0]
2025-03-22 23:59:42.206829 | orchestrator | ok: [testbed-manager]
2025-03-22 23:59:42.206842 | orchestrator | ok: [testbed-node-1]
2025-03-22 23:59:42.206854 | orchestrator | ok: [testbed-node-2]
2025-03-22 23:59:42.206866 | orchestrator | ok: [testbed-node-3]
2025-03-22 23:59:42.206879 | orchestrator | ok: [testbed-node-4]
2025-03-22 23:59:42.206891 | orchestrator | ok: [testbed-node-5]
2025-03-22 23:59:42.206904 | orchestrator |
2025-03-22 23:59:42.206916 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-03-22 23:59:42.206929 | orchestrator | Saturday 22 March 2025 23:59:26 +0000 (0:00:02.518) 0:01:05.607 ********
2025-03-22 23:59:42.206941 | orchestrator | ok: [testbed-node-3]
2025-03-22 23:59:42.206953 | orchestrator | ok: [testbed-node-1]
2025-03-22 23:59:42.206966 | orchestrator | ok: [testbed-manager]
2025-03-22 23:59:42.206978 | orchestrator | ok: [testbed-node-0]
2025-03-22 23:59:42.206990 | orchestrator | ok: [testbed-node-2]
2025-03-22 23:59:42.207002 | orchestrator | ok: [testbed-node-4]
2025-03-22 23:59:42.207015 | orchestrator | ok: [testbed-node-5]
2025-03-22 23:59:42.207027 | orchestrator |
2025-03-22 23:59:42.207039 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-03-22 23:59:42.207052 | orchestrator | Saturday 22 March 2025 23:59:30 +0000 (0:00:03.869) 0:01:09.477 ********
2025-03-22 23:59:42.207064 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-03-22 23:59:42.207078 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-22 23:59:42.207092 | orchestrator |
2025-03-22 23:59:42.207104 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-03-22 23:59:42.207116 | orchestrator | Saturday 22 March 2025 23:59:32 +0000 (0:00:01.471) 0:01:10.948 ********
2025-03-22 23:59:42.207129 | orchestrator | changed: [testbed-manager]
2025-03-22 23:59:42.207141 | orchestrator |
2025-03-22 23:59:42.207153 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-03-22 23:59:42.207166 | orchestrator | Saturday 22 March 2025 23:59:35 +0000 (0:00:03.123) 0:01:14.072 ********
2025-03-22 23:59:42.207178 | orchestrator | changed: [testbed-node-0]
2025-03-22 23:59:42.207191 | orchestrator | changed: [testbed-manager]
2025-03-22 23:59:42.207203 | orchestrator | changed: [testbed-node-1]
2025-03-22 23:59:42.207215 | orchestrator | changed: [testbed-node-2]
2025-03-22 23:59:42.207229 | orchestrator | changed: [testbed-node-3]
2025-03-22 23:59:42.207248 | orchestrator | changed: [testbed-node-5]
2025-03-22 23:59:42.207262 | orchestrator | changed: [testbed-node-4]
2025-03-22 23:59:42.207281 | orchestrator |
2025-03-22 23:59:42.207294 | orchestrator | PLAY RECAP *********************************************************************
2025-03-22 23:59:42.207306 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 23:59:42.207319 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 23:59:42.207332 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 23:59:42.207349 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 23:59:42.207362 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 23:59:42.207374 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 23:59:42.207386 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-22 23:59:42.207398 | orchestrator |
2025-03-22 23:59:42.207411 | orchestrator |
2025-03-22 23:59:42.207424 | orchestrator | TASKS RECAP ********************************************************************
2025-03-22 23:59:42.207436 | orchestrator | Saturday 22 March 2025 23:59:39 +0000 (0:00:04.277) 0:01:18.349 ********
2025-03-22 23:59:42.207449 | orchestrator | ===============================================================================
2025-03-22 23:59:42.207461 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 18.25s
2025-03-22 23:59:42.207473 | orchestrator | osism.services.netdata : Add repository -------------------------------- 13.42s
2025-03-22 23:59:42.207486 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 8.68s
2025-03-22 23:59:42.207498 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 5.43s
2025-03-22 23:59:42.207510 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 4.28s
2025-03-22 23:59:42.207523 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 3.87s
2025-03-22 23:59:42.207535 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 3.44s
2025-03-22 23:59:42.207548 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.14s
2025-03-22 23:59:42.207560 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 3.12s
2025-03-22 23:59:42.207572 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.62s
2025-03-22 23:59:42.207585 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.52s
2025-03-22 23:59:42.207626 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 2.17s
2025-03-22 23:59:45.275012 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.17s
2025-03-22 23:59:45.275130 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.79s
2025-03-22 23:59:45.275149 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.73s
2025-03-22 23:59:45.275164 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.47s
2025-03-22 23:59:45.275179 | orchestrator | 2025-03-22 23:59:42 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:59:45.275195 | orchestrator | 2025-03-22 23:59:42 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-22 23:59:45.275209 | orchestrator | 2025-03-22 23:59:42 | INFO  | Wait 1 second(s) until the next check
2025-03-22 23:59:45.275240 | orchestrator | 2025-03-22 23:59:45 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-22 23:59:45.276556 | orchestrator | 2025-03-22 23:59:45 | INFO  | Task de692522-120e-4277-a3f1-cc74817ce6a4 is in state STARTED
2025-03-22 23:59:45.276584 | orchestrator | 2025-03-22 23:59:45 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:59:45.276643 | orchestrator | 2025-03-22 23:59:45 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-22 23:59:48.320441 | orchestrator | 2025-03-22 23:59:45 | INFO  | Wait 1 second(s) until the next check
2025-03-22 23:59:48.320584 | orchestrator | 2025-03-22 23:59:48 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-22 23:59:48.321923 | orchestrator | 2025-03-22 23:59:48 | INFO  | Task de692522-120e-4277-a3f1-cc74817ce6a4 is in state SUCCESS
2025-03-22 23:59:48.322782 | orchestrator | 2025-03-22 23:59:48 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:59:48.322834 | orchestrator | 2025-03-22 23:59:48 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-22 23:59:51.396081 | orchestrator | 2025-03-22 23:59:48 | INFO  | Wait 1 second(s) until the next check
2025-03-22 23:59:51.396220 | orchestrator | 2025-03-22 23:59:51 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-22 23:59:51.398962 | orchestrator | 2025-03-22 23:59:51 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:59:51.401012 | orchestrator | 2025-03-22 23:59:51 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-22 23:59:54.441756 | orchestrator | 2025-03-22 23:59:51 | INFO  | Wait 1 second(s) until the next check
2025-03-22 23:59:54.441883 | orchestrator | 2025-03-22 23:59:54 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-22 23:59:57.498188 | orchestrator | 2025-03-22 23:59:54 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:59:57.498299 | orchestrator | 2025-03-22 23:59:54 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-22 23:59:57.498318 | orchestrator | 2025-03-22 23:59:54 | INFO  | Wait 1 second(s) until the next check
2025-03-22 23:59:57.498350 | orchestrator | 2025-03-22 23:59:57 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-22 23:59:57.500736 | orchestrator | 2025-03-22 23:59:57 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-22 23:59:57.501052 | orchestrator | 2025-03-22 23:59:57 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-23 00:00:00.544443 | orchestrator | 2025-03-22 23:59:57 | INFO  | Wait 1 second(s) until the next check
2025-03-23 00:00:00.544593 | orchestrator | 2025-03-23 00:00:00 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-23 00:00:00.544737 | orchestrator | 2025-03-23 00:00:00 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-23 00:00:00.544764 | orchestrator | 2025-03-23 00:00:00 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-23 00:00:03.611926 | orchestrator | 2025-03-23 00:00:00 | INFO  | Wait 1 second(s) until the next check
2025-03-23 00:00:03.612104 | orchestrator | 2025-03-23 00:00:03 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-23 00:00:03.612988 | orchestrator | 2025-03-23 00:00:03 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-23 00:00:03.615339 | orchestrator | 2025-03-23 00:00:03 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-23 00:00:03.615474 | orchestrator | 2025-03-23 00:00:03 | INFO  | Wait 1 second(s) until the next check
2025-03-23 00:00:06.655542 | orchestrator | 2025-03-23 00:00:06 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-23 00:00:06.656044 | orchestrator | 2025-03-23 00:00:06 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-23 00:00:06.656803 | orchestrator | 2025-03-23 00:00:06 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-23 00:00:06.656882 | orchestrator | 2025-03-23 00:00:06 | INFO  | Wait 1 second(s) until the next check
2025-03-23 00:00:09.702881 | orchestrator | 2025-03-23 00:00:09 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-23 00:00:09.705743 | orchestrator | 2025-03-23 00:00:09 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED
2025-03-23 00:00:09.706260 | orchestrator | 2025-03-23 00:00:09 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-23
00:00:12.782129 | orchestrator | 2025-03-23 00:00:09 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:00:12.782264 | orchestrator | 2025-03-23 00:00:12 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:00:12.785966 | orchestrator | 2025-03-23 00:00:12 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED 2025-03-23 00:00:12.787319 | orchestrator | 2025-03-23 00:00:12 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:00:12.787884 | orchestrator | 2025-03-23 00:00:12 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:00:15.831892 | orchestrator | 2025-03-23 00:00:15 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:00:15.832102 | orchestrator | 2025-03-23 00:00:15 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED 2025-03-23 00:00:15.834793 | orchestrator | 2025-03-23 00:00:15 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:00:18.885808 | orchestrator | 2025-03-23 00:00:15 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:00:18.885943 | orchestrator | 2025-03-23 00:00:18 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:00:18.887193 | orchestrator | 2025-03-23 00:00:18 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED 2025-03-23 00:00:18.888919 | orchestrator | 2025-03-23 00:00:18 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:00:21.938878 | orchestrator | 2025-03-23 00:00:18 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:00:21.939017 | orchestrator | 2025-03-23 00:00:21 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:00:21.940481 | orchestrator | 2025-03-23 00:00:21 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED 2025-03-23 00:00:21.940976 | orchestrator | 2025-03-23 00:00:21 | 
INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:00:24.993461 | orchestrator | 2025-03-23 00:00:21 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:00:24.993644 | orchestrator | 2025-03-23 00:00:24 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:00:24.995247 | orchestrator | 2025-03-23 00:00:24 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED 2025-03-23 00:00:24.998749 | orchestrator | 2025-03-23 00:00:24 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:00:28.048247 | orchestrator | 2025-03-23 00:00:24 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:00:28.048407 | orchestrator | 2025-03-23 00:00:28 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:00:28.048646 | orchestrator | 2025-03-23 00:00:28 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED 2025-03-23 00:00:28.049325 | orchestrator | 2025-03-23 00:00:28 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:00:28.049540 | orchestrator | 2025-03-23 00:00:28 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:00:31.103040 | orchestrator | 2025-03-23 00:00:31 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:00:31.104917 | orchestrator | 2025-03-23 00:00:31 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED 2025-03-23 00:00:31.107895 | orchestrator | 2025-03-23 00:00:31 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:00:34.145227 | orchestrator | 2025-03-23 00:00:31 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:00:34.145347 | orchestrator | 2025-03-23 00:00:34 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:00:34.146273 | orchestrator | 2025-03-23 00:00:34 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in 
state STARTED 2025-03-23 00:00:34.147644 | orchestrator | 2025-03-23 00:00:34 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:00:34.147716 | orchestrator | 2025-03-23 00:00:34 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:00:37.211237 | orchestrator | 2025-03-23 00:00:37 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:00:37.220899 | orchestrator | 2025-03-23 00:00:37 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED 2025-03-23 00:00:37.224356 | orchestrator | 2025-03-23 00:00:37 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:00:40.266858 | orchestrator | 2025-03-23 00:00:37 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:00:40.266982 | orchestrator | 2025-03-23 00:00:40 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:00:40.267861 | orchestrator | 2025-03-23 00:00:40 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state STARTED 2025-03-23 00:00:40.273712 | orchestrator | 2025-03-23 00:00:40 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:00:43.321537 | orchestrator | 2025-03-23 00:00:40 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:00:43.321713 | orchestrator | 2025-03-23 00:00:43 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:00:43.322375 | orchestrator | 2025-03-23 00:00:43 | INFO  | Task f352bb1c-f322-4596-96d0-39457e8a2e38 is in state STARTED 2025-03-23 00:00:43.325808 | orchestrator | 2025-03-23 00:00:43 | INFO  | Task 8e950876-4a9d-46d0-8721-1382003ea623 is in state SUCCESS 2025-03-23 00:00:43.327312 | orchestrator | 2025-03-23 00:00:43.327359 | orchestrator | 2025-03-23 00:00:43.327373 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-03-23 00:00:43.327389 | orchestrator | 2025-03-23 00:00:43.327403 | 
orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-03-23 00:00:43.327417 | orchestrator | Saturday 22 March 2025 23:58:48 +0000 (0:00:00.207) 0:00:00.207 ******** 2025-03-23 00:00:43.327431 | orchestrator | ok: [testbed-manager] 2025-03-23 00:00:43.327447 | orchestrator | 2025-03-23 00:00:43.327461 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-03-23 00:00:43.327475 | orchestrator | Saturday 22 March 2025 23:58:49 +0000 (0:00:01.068) 0:00:01.275 ******** 2025-03-23 00:00:43.327489 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-03-23 00:00:43.327523 | orchestrator | 2025-03-23 00:00:43.327538 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-03-23 00:00:43.327552 | orchestrator | Saturday 22 March 2025 23:58:50 +0000 (0:00:01.163) 0:00:02.439 ******** 2025-03-23 00:00:43.327566 | orchestrator | changed: [testbed-manager] 2025-03-23 00:00:43.327580 | orchestrator | 2025-03-23 00:00:43.327594 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-03-23 00:00:43.327638 | orchestrator | Saturday 22 March 2025 23:58:52 +0000 (0:00:02.140) 0:00:04.580 ******** 2025-03-23 00:00:43.327653 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
2025-03-23 00:00:43.327668 | orchestrator | ok: [testbed-manager] 2025-03-23 00:00:43.327682 | orchestrator | 2025-03-23 00:00:43.327695 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-03-23 00:00:43.327709 | orchestrator | Saturday 22 March 2025 23:59:42 +0000 (0:00:50.149) 0:00:54.729 ******** 2025-03-23 00:00:43.327723 | orchestrator | changed: [testbed-manager] 2025-03-23 00:00:43.327737 | orchestrator | 2025-03-23 00:00:43.327751 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 00:00:43.327765 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 00:00:43.327781 | orchestrator | 2025-03-23 00:00:43.327795 | orchestrator | 2025-03-23 00:00:43.327809 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-23 00:00:43.327822 | orchestrator | Saturday 22 March 2025 23:59:46 +0000 (0:00:03.944) 0:00:58.674 ******** 2025-03-23 00:00:43.327836 | orchestrator | =============================================================================== 2025-03-23 00:00:43.327850 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 50.15s 2025-03-23 00:00:43.327864 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.94s 2025-03-23 00:00:43.327877 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 2.14s 2025-03-23 00:00:43.327892 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.16s 2025-03-23 00:00:43.327908 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.07s 2025-03-23 00:00:43.327923 | orchestrator | 2025-03-23 00:00:43.327938 | orchestrator | 2025-03-23 00:00:43.327953 | orchestrator | PLAY [Apply role common] 
*******************************************************
2025-03-23 00:00:43.327969 | orchestrator |
2025-03-23 00:00:43.327985 | orchestrator | TASK [common : include_tasks] **************************************************
2025-03-23 00:00:43.328001 | orchestrator | Saturday 22 March 2025 23:58:15 +0000 (0:00:00.363) 0:00:00.363 ********
2025-03-23 00:00:43.328017 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-23 00:00:43.328033 | orchestrator |
2025-03-23 00:00:43.328050 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-03-23 00:00:43.328066 | orchestrator | Saturday 22 March 2025 23:58:17 +0000 (0:00:02.169) 0:00:02.533 ********
2025-03-23 00:00:43.328081 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-03-23 00:00:43.328097 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-03-23 00:00:43.328119 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-03-23 00:00:43.328135 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-03-23 00:00:43.328152 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-03-23 00:00:43.328167 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-03-23 00:00:43.328183 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-03-23 00:00:43.328199 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-03-23 00:00:43.328223 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-03-23 00:00:43.328239 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-03-23 00:00:43.328254 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-03-23 00:00:43.328268 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-03-23 00:00:43.328282 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-03-23 00:00:43.328296 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-03-23 00:00:43.328311 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-03-23 00:00:43.328325 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-03-23 00:00:43.328348 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-03-23 00:00:43.328363 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-03-23 00:00:43.328377 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-03-23 00:00:43.328391 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-03-23 00:00:43.328405 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-03-23 00:00:43.328419 | orchestrator |
2025-03-23 00:00:43.328433 | orchestrator | TASK [common : include_tasks] **************************************************
2025-03-23 00:00:43.328452 | orchestrator | Saturday 22 March 2025 23:58:21 +0000 (0:00:04.383) 0:00:06.916 ********
2025-03-23 00:00:43.328466 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-23 00:00:43.328486 | orchestrator |
2025-03-23 00:00:43.328500 |
orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-03-23 00:00:43.328514 | orchestrator | Saturday 22 March 2025 23:58:23 +0000 (0:00:01.635) 0:00:08.552 ******** 2025-03-23 00:00:43.328532 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 00:00:43.328551 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 00:00:43.328566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 00:00:43.328581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 00:00:43.328622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 00:00:43.328638 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 00:00:43.328660 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.328676 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 00:00:43.328691 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.328705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.328720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.328741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.328762 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-03-23 00:00:43.328779 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.328804 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.328819 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.328834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-03-23 00:00:43.328848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.328869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.328887 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.328901 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.328916 | orchestrator | 2025-03-23 00:00:43.328930 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS 
certificate] *** 2025-03-23 00:00:43.328944 | orchestrator | Saturday 22 March 2025 23:58:28 +0000 (0:00:04.730) 0:00:13.282 ******** 2025-03-23 00:00:43.328971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-23 00:00:43.328986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:00:43.329001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:00:43.329015 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-23 00:00:43.329030 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:00:43.329050 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:00:43.329065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-23 00:00:43.329079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:00:43.329100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:00:43.329115 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:00:43.329130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-23 00:00:43.329144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:00:43.329159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:00:43.329179 | orchestrator | skipping: [testbed-manager] 2025-03-23 00:00:43.329193 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:00:43.329207 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-23 00:00:43.329222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:00:43.329236 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:00:43.329250 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:00:43.329264 | orchestrator | skipping: [testbed-node-3] 2025-03-23 00:00:43.329278 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-23 00:00:43.329299 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:00:43.329314 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:00:43.329328 | orchestrator | skipping: [testbed-node-4] 2025-03-23 00:00:43.329343 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-23 00:00:43.329364 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:00:43.329379 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:00:43.329393 | orchestrator | skipping: [testbed-node-5] 2025-03-23 00:00:43.329407 | orchestrator | 2025-03-23 00:00:43.329421 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-03-23 00:00:43.329436 | orchestrator | Saturday 22 March 2025 23:58:30 +0000 (0:00:02.116) 0:00:15.399 ******** 2025-03-23 00:00:43.329450 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-23 00:00:43.329464 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:00:43.329487 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:00:43.329975 | orchestrator | skipping: [testbed-manager] 2025-03-23 00:00:43.330002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-23 00:00:43.330087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:00:43.330116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-03-23 00:00:43.330132 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:00:43.330146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-23 00:00:43.330161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:00:43.330175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:00:43.330190 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:00:43.330206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 
'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-23 00:00:43.330229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:00:43.330243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:00:43.330264 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:00:43.330278 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-23 00:00:43.330293 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:00:43.330308 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:00:43.330322 | orchestrator | skipping: [testbed-node-3] 2025-03-23 00:00:43.330342 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-23 00:00:43.330357 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:00:43.330388 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:00:43.330404 | orchestrator | skipping: [testbed-node-4] 2025-03-23 00:00:43.330418 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-23 00:00:43.330439 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:00:43.330454 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:00:43.330468 | orchestrator | skipping: [testbed-node-5] 2025-03-23 00:00:43.330482 | orchestrator | 2025-03-23 00:00:43.330500 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-03-23 00:00:43.330516 | orchestrator | Saturday 22 March 2025 23:58:32 +0000 (0:00:02.473) 0:00:17.873 ******** 2025-03-23 00:00:43.330532 | orchestrator | skipping: [testbed-manager] 2025-03-23 00:00:43.330548 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:00:43.330564 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:00:43.330580 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:00:43.330596 | orchestrator | skipping: [testbed-node-3] 2025-03-23 00:00:43.330633 | orchestrator | skipping: [testbed-node-4] 2025-03-23 00:00:43.330649 | orchestrator | skipping: [testbed-node-5] 2025-03-23 00:00:43.330665 | orchestrator | 2025-03-23 00:00:43.330680 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-03-23 00:00:43.330696 | orchestrator | Saturday 22 March 2025 23:58:34 +0000 (0:00:02.051) 0:00:19.924 ******** 2025-03-23 00:00:43.330712 | orchestrator | skipping: [testbed-manager] 2025-03-23 00:00:43.330728 | orchestrator | skipping: [testbed-node-0] 
2025-03-23 00:00:43.330743 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:00:43.330759 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:00:43.330775 | orchestrator | skipping: [testbed-node-3] 2025-03-23 00:00:43.330790 | orchestrator | skipping: [testbed-node-4] 2025-03-23 00:00:43.330806 | orchestrator | skipping: [testbed-node-5] 2025-03-23 00:00:43.330821 | orchestrator | 2025-03-23 00:00:43.330838 | orchestrator | TASK [common : Ensure fluentd image is present for label check] **************** 2025-03-23 00:00:43.330853 | orchestrator | Saturday 22 March 2025 23:58:37 +0000 (0:00:02.091) 0:00:22.016 ******** 2025-03-23 00:00:43.330867 | orchestrator | ok: [testbed-node-0] 2025-03-23 00:00:43.330881 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:00:43.330895 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:00:43.330908 | orchestrator | changed: [testbed-node-3] 2025-03-23 00:00:43.330922 | orchestrator | changed: [testbed-node-5] 2025-03-23 00:00:43.330936 | orchestrator | changed: [testbed-node-4] 2025-03-23 00:00:43.330950 | orchestrator | changed: [testbed-manager] 2025-03-23 00:00:43.330964 | orchestrator | 2025-03-23 00:00:43.330978 | orchestrator | TASK [common : Fetch fluentd Docker image labels] ****************************** 2025-03-23 00:00:43.330992 | orchestrator | Saturday 22 March 2025 23:59:06 +0000 (0:00:29.910) 0:00:51.927 ******** 2025-03-23 00:00:43.331006 | orchestrator | ok: [testbed-node-1] 2025-03-23 00:00:43.331020 | orchestrator | ok: [testbed-node-0] 2025-03-23 00:00:43.331034 | orchestrator | ok: [testbed-node-2] 2025-03-23 00:00:43.331047 | orchestrator | ok: [testbed-manager] 2025-03-23 00:00:43.331061 | orchestrator | ok: [testbed-node-3] 2025-03-23 00:00:43.331083 | orchestrator | ok: [testbed-node-4] 2025-03-23 00:00:43.331097 | orchestrator | ok: [testbed-node-5] 2025-03-23 00:00:43.331111 | orchestrator | 2025-03-23 00:00:43.331125 | orchestrator | TASK [common : Set fluentd facts] 
********************************************** 2025-03-23 00:00:43.331139 | orchestrator | Saturday 22 March 2025 23:59:10 +0000 (0:00:03.871) 0:00:55.798 ******** 2025-03-23 00:00:43.331153 | orchestrator | ok: [testbed-manager] 2025-03-23 00:00:43.331167 | orchestrator | ok: [testbed-node-0] 2025-03-23 00:00:43.331186 | orchestrator | ok: [testbed-node-1] 2025-03-23 00:00:43.331200 | orchestrator | ok: [testbed-node-2] 2025-03-23 00:00:43.331214 | orchestrator | ok: [testbed-node-3] 2025-03-23 00:00:43.331228 | orchestrator | ok: [testbed-node-4] 2025-03-23 00:00:43.331242 | orchestrator | ok: [testbed-node-5] 2025-03-23 00:00:43.331255 | orchestrator | 2025-03-23 00:00:43.331269 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ****************************** 2025-03-23 00:00:43.331284 | orchestrator | Saturday 22 March 2025 23:59:12 +0000 (0:00:01.755) 0:00:57.554 ******** 2025-03-23 00:00:43.331298 | orchestrator | skipping: [testbed-manager] 2025-03-23 00:00:43.331312 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:00:43.331326 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:00:43.331340 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:00:43.331360 | orchestrator | skipping: [testbed-node-3] 2025-03-23 00:00:43.331374 | orchestrator | skipping: [testbed-node-4] 2025-03-23 00:00:43.331388 | orchestrator | skipping: [testbed-node-5] 2025-03-23 00:00:43.331402 | orchestrator | 2025-03-23 00:00:43.331416 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-03-23 00:00:43.331430 | orchestrator | Saturday 22 March 2025 23:59:13 +0000 (0:00:01.367) 0:00:58.921 ******** 2025-03-23 00:00:43.331444 | orchestrator | skipping: [testbed-manager] 2025-03-23 00:00:43.331458 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:00:43.331472 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:00:43.331486 | orchestrator | skipping: [testbed-node-2] 2025-03-23 
00:00:43.331499 | orchestrator | skipping: [testbed-node-3] 2025-03-23 00:00:43.331513 | orchestrator | skipping: [testbed-node-4] 2025-03-23 00:00:43.331527 | orchestrator | skipping: [testbed-node-5] 2025-03-23 00:00:43.331541 | orchestrator | 2025-03-23 00:00:43.331555 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-03-23 00:00:43.331569 | orchestrator | Saturday 22 March 2025 23:59:15 +0000 (0:00:01.144) 0:01:00.066 ******** 2025-03-23 00:00:43.331583 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 00:00:43.331598 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.331635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 00:00:43.331657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 00:00:43.331672 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 00:00:43.331687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 00:00:43.331708 | orchestrator | changed: [testbed-manager] => (item={'key': 
'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.331728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.331743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.331757 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.331782 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 00:00:43.331797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.331812 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 00:00:43.331832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.331847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.331871 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.331886 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.331900 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.331920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.331935 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.331949 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:00:43.331964 | orchestrator |
2025-03-23 00:00:43.331978 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-03-23 00:00:43.331992 | orchestrator | Saturday 22 March 2025 23:59:21 +0000 (0:00:06.822) 0:01:06.888 ********
2025-03-23 00:00:43.332006 | orchestrator | [WARNING]: Skipped
2025-03-23 00:00:43.332020 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-03-23 00:00:43.332034 | orchestrator | to this access issue:
2025-03-23 00:00:43.332048 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-03-23 00:00:43.332062 | orchestrator | directory
2025-03-23 00:00:43.332076 | orchestrator | ok: [testbed-manager -> localhost]
2025-03-23 00:00:43.332090 | orchestrator |
2025-03-23 00:00:43.332104 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-03-23 00:00:43.332119 | orchestrator | Saturday 22 March 2025 23:59:23 +0000 (0:00:01.519) 0:01:08.407 ********
2025-03-23 00:00:43.332133 | orchestrator | [WARNING]: Skipped
2025-03-23 00:00:43.332152 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2025-03-23 00:00:43.332167 | orchestrator | to this access issue:
2025-03-23 00:00:43.332180 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2025-03-23 00:00:43.332194 | orchestrator | directory
2025-03-23 00:00:43.332208 | orchestrator | ok: [testbed-manager -> localhost]
2025-03-23 00:00:43.332222 | orchestrator |
2025-03-23 00:00:43.332236 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-03-23 00:00:43.332250 | orchestrator | Saturday 22 March 2025 23:59:24 +0000 (0:00:01.153) 0:01:09.561 ********
2025-03-23 00:00:43.332264 | orchestrator | [WARNING]: Skipped
2025-03-23 00:00:43.332278 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2025-03-23 00:00:43.332292 | orchestrator | to this access issue:
2025-03-23 00:00:43.332306 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2025-03-23 00:00:43.332320 | orchestrator | directory
2025-03-23 00:00:43.332334 | orchestrator | ok: [testbed-manager -> localhost]
2025-03-23 00:00:43.332347 | orchestrator |
2025-03-23 00:00:43.332361 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-03-23 00:00:43.332375 | orchestrator | Saturday 22 March 2025 23:59:25 +0000 (0:00:00.973) 0:01:10.534 ********
2025-03-23 00:00:43.332395 | orchestrator | [WARNING]: Skipped
2025-03-23 00:00:43.332409 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2025-03-23 00:00:43.332423 | orchestrator | to this access issue:
2025-03-23 00:00:43.332437 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2025-03-23 00:00:43.332451 | orchestrator | directory
2025-03-23 00:00:43.332465 | orchestrator | ok: [testbed-manager -> localhost]
2025-03-23 00:00:43.332479 | orchestrator |
2025-03-23 00:00:43.332493 | orchestrator | TASK [common : Copying over td-agent.conf] *************************************
2025-03-23 00:00:43.332506 | orchestrator | Saturday 22 March 2025 23:59:26 +0000 (0:00:00.729) 0:01:11.264 ********
2025-03-23 00:00:43.332520 | orchestrator | changed: [testbed-manager]
2025-03-23 00:00:43.332534 | orchestrator | changed: [testbed-node-0]
2025-03-23 00:00:43.332548 | orchestrator | changed: [testbed-node-1]
2025-03-23 00:00:43.332562 | orchestrator | changed: [testbed-node-3]
2025-03-23 00:00:43.332576 | orchestrator | changed: [testbed-node-4]
2025-03-23 00:00:43.332590 | orchestrator | changed: [testbed-node-2]
2025-03-23 00:00:43.332656 | orchestrator | changed: [testbed-node-5]
2025-03-23 00:00:43.332672 | orchestrator |
2025-03-23 00:00:43.332687 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2025-03-23 00:00:43.332701 | orchestrator | Saturday 22 March 2025 23:59:31 +0000 (0:00:05.635) 0:01:16.900 ********
2025-03-23 00:00:43.332716 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-03-23 00:00:43.332730 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-03-23 00:00:43.332744 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-03-23 00:00:43.332758 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-03-23 00:00:43.332772 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-03-23 00:00:43.332786 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-03-23 00:00:43.332799 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-03-23 00:00:43.332813 | orchestrator |
2025-03-23 00:00:43.332827 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2025-03-23 00:00:43.332841 | orchestrator | Saturday 22 March 2025 23:59:36 +0000 (0:00:04.310) 0:01:21.210 ********
2025-03-23 00:00:43.332855 | orchestrator | changed: [testbed-manager]
2025-03-23 00:00:43.332869 | orchestrator | changed: [testbed-node-0]
2025-03-23 00:00:43.332883 | orchestrator | changed: [testbed-node-1]
2025-03-23 00:00:43.332897 | orchestrator | changed: [testbed-node-2]
2025-03-23 00:00:43.332911 | orchestrator | changed: [testbed-node-3]
2025-03-23 00:00:43.332924 | orchestrator | changed: [testbed-node-4]
2025-03-23 00:00:43.332938 | orchestrator | changed: [testbed-node-5]
2025-03-23 00:00:43.332952 | orchestrator |
2025-03-23 00:00:43.332966 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2025-03-23 00:00:43.332980 | orchestrator | Saturday 22 March 2025 23:59:40 +0000 (0:00:03.771) 0:01:24.982 ********
2025-03-23 00:00:43.332994 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-03-23 00:00:43.333021 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:00:43.333044 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 00:00:43.333059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:00:43.333072 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.333089 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.333102 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 00:00:43.333115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:00:43.333128 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 00:00:43.333158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:00:43.333172 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.333188 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.333202 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 00:00:43.333218 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:00:43.333231 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.333244 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 00:00:43.333257 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:00:43.333281 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 
'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 00:00:43.333295 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:00:43.333308 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.333321 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.333333 | orchestrator 
|
2025-03-23 00:00:43.333346 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2025-03-23 00:00:43.333359 | orchestrator | Saturday 22 March 2025 23:59:43 +0000 (0:00:03.637) 0:01:28.620 ********
2025-03-23 00:00:43.333371 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-03-23 00:00:43.333383 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-03-23 00:00:43.333396 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-03-23 00:00:43.333408 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-03-23 00:00:43.333421 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-03-23 00:00:43.333433 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-03-23 00:00:43.333445 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-03-23 00:00:43.333458 | orchestrator |
2025-03-23 00:00:43.333470 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-03-23 00:00:43.333486 | orchestrator | Saturday 22 March 2025 23:59:46 +0000 (0:00:03.211) 0:01:31.831 ********
2025-03-23 00:00:43.333499 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-03-23 00:00:43.333511 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-03-23 00:00:43.333529 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-03-23 00:00:43.333541 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-03-23 00:00:43.333554 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-03-23 00:00:43.333566 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-03-23 00:00:43.333578 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-03-23 00:00:43.333590 | orchestrator |
2025-03-23 00:00:43.333643 | orchestrator | TASK [common : Check common containers] ****************************************
2025-03-23 00:00:43.333657 | orchestrator | Saturday 22 March 2025 23:59:49 +0000 (0:00:02.761) 0:01:34.592 ********
2025-03-23 00:00:43.333671 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-03-23 00:00:43.333694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-03-23 00:00:43.333708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 00:00:43.333719 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.333730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.333740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 00:00:43.333793 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 00:00:43.333805 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 00:00:43.333821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.333832 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.333843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.333853 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.333864 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.333880 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.333891 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-23 00:00:43.333901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.333916 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.333927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.333941 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.333952 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.333963 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:00:43.333978 | orchestrator | 2025-03-23 00:00:43.333988 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-03-23 00:00:43.333999 | orchestrator | Saturday 22 March 2025 23:59:54 +0000 (0:00:04.897) 0:01:39.490 ******** 2025-03-23 00:00:43.334009 | orchestrator | changed: [testbed-manager] 2025-03-23 00:00:43.334052 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:00:43.334064 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:00:43.334074 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:00:43.334084 | orchestrator | changed: [testbed-node-3] 2025-03-23 00:00:43.334094 | orchestrator | changed: [testbed-node-4] 2025-03-23 00:00:43.334104 | orchestrator | changed: [testbed-node-5] 2025-03-23 00:00:43.334114 | orchestrator | 2025-03-23 00:00:43.334125 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-03-23 00:00:43.334135 | orchestrator | Saturday 22 March 2025 23:59:56 +0000 (0:00:01.893) 0:01:41.384 ******** 2025-03-23 00:00:43.334145 | orchestrator | changed: [testbed-manager] 2025-03-23 00:00:43.334155 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:00:43.334165 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:00:43.334179 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:00:43.334189 | orchestrator | changed: [testbed-node-3] 2025-03-23 00:00:43.334199 | orchestrator | changed: [testbed-node-4] 2025-03-23 00:00:43.334208 | orchestrator | changed: [testbed-node-5] 2025-03-23 00:00:43.334218 | orchestrator | 2025-03-23 00:00:43.334228 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-03-23 00:00:43.334238 | 
orchestrator | Saturday 22 March 2025 23:59:58 +0000 (0:00:01.864) 0:01:43.248 ******** 2025-03-23 00:00:43.334248 | orchestrator | 2025-03-23 00:00:43.334259 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-03-23 00:00:43.334269 | orchestrator | Saturday 22 March 2025 23:59:58 +0000 (0:00:00.266) 0:01:43.515 ******** 2025-03-23 00:00:43.334279 | orchestrator | 2025-03-23 00:00:43.334289 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-03-23 00:00:43.334299 | orchestrator | Saturday 22 March 2025 23:59:58 +0000 (0:00:00.052) 0:01:43.567 ******** 2025-03-23 00:00:43.334309 | orchestrator | 2025-03-23 00:00:43.334319 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-03-23 00:00:43.334329 | orchestrator | Saturday 22 March 2025 23:59:58 +0000 (0:00:00.054) 0:01:43.622 ******** 2025-03-23 00:00:43.334339 | orchestrator | 2025-03-23 00:00:43.334349 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-03-23 00:00:43.334359 | orchestrator | Saturday 22 March 2025 23:59:58 +0000 (0:00:00.072) 0:01:43.694 ******** 2025-03-23 00:00:43.334369 | orchestrator | 2025-03-23 00:00:43.334380 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-03-23 00:00:43.334390 | orchestrator | Saturday 22 March 2025 23:59:58 +0000 (0:00:00.056) 0:01:43.750 ******** 2025-03-23 00:00:43.334399 | orchestrator | 2025-03-23 00:00:43.334409 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-03-23 00:00:43.334419 | orchestrator | Saturday 22 March 2025 23:59:59 +0000 (0:00:00.308) 0:01:44.059 ******** 2025-03-23 00:00:43.334429 | orchestrator | 2025-03-23 00:00:43.334439 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-03-23 00:00:43.334454 
| orchestrator | Saturday 22 March 2025 23:59:59 +0000 (0:00:00.078) 0:01:44.138 ******** 2025-03-23 00:00:43.334465 | orchestrator | changed: [testbed-manager] 2025-03-23 00:00:43.334475 | orchestrator | changed: [testbed-node-3] 2025-03-23 00:00:43.334485 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:00:43.334495 | orchestrator | changed: [testbed-node-5] 2025-03-23 00:00:43.334505 | orchestrator | changed: [testbed-node-4] 2025-03-23 00:00:43.334515 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:00:43.334525 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:00:43.334535 | orchestrator | 2025-03-23 00:00:43.334545 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-03-23 00:00:43.334562 | orchestrator | Sunday 23 March 2025 00:00:09 +0000 (0:00:10.276) 0:01:54.414 ********** 2025-03-23 00:00:43.334572 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:00:43.334582 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:00:43.334592 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:00:43.334617 | orchestrator | changed: [testbed-manager] 2025-03-23 00:00:43.334628 | orchestrator | changed: [testbed-node-4] 2025-03-23 00:00:43.334638 | orchestrator | changed: [testbed-node-5] 2025-03-23 00:00:43.334648 | orchestrator | changed: [testbed-node-3] 2025-03-23 00:00:43.334658 | orchestrator | 2025-03-23 00:00:43.334672 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-03-23 00:00:43.334683 | orchestrator | Sunday 23 March 2025 00:00:32 +0000 (0:00:23.444) 0:02:17.859 ********** 2025-03-23 00:00:43.334693 | orchestrator | ok: [testbed-manager] 2025-03-23 00:00:43.334703 | orchestrator | ok: [testbed-node-1] 2025-03-23 00:00:43.334713 | orchestrator | ok: [testbed-node-0] 2025-03-23 00:00:43.334723 | orchestrator | ok: [testbed-node-2] 2025-03-23 00:00:43.334733 | orchestrator | ok: [testbed-node-3] 2025-03-23 00:00:43.334743 | 
orchestrator | ok: [testbed-node-4] 2025-03-23 00:00:43.334753 | orchestrator | ok: [testbed-node-5] 2025-03-23 00:00:43.334763 | orchestrator | 2025-03-23 00:00:43.334773 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-03-23 00:00:43.334783 | orchestrator | Sunday 23 March 2025 00:00:34 +0000 (0:00:02.013) 0:02:19.873 ********** 2025-03-23 00:00:43.334793 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:00:43.334803 | orchestrator | changed: [testbed-node-3] 2025-03-23 00:00:43.334813 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:00:43.334823 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:00:43.334832 | orchestrator | changed: [testbed-node-4] 2025-03-23 00:00:43.334842 | orchestrator | changed: [testbed-node-5] 2025-03-23 00:00:43.334852 | orchestrator | changed: [testbed-manager] 2025-03-23 00:00:43.334862 | orchestrator | 2025-03-23 00:00:43.334872 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 00:00:43.334883 | orchestrator | testbed-manager : ok=25  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-23 00:00:43.334894 | orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-23 00:00:43.334904 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-23 00:00:43.334914 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-23 00:00:43.334925 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-23 00:00:43.334935 | orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-23 00:00:43.334945 | orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-23 
00:00:43.334955 | orchestrator | 2025-03-23 00:00:43.334965 | orchestrator | 2025-03-23 00:00:43.334975 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-23 00:00:43.334985 | orchestrator | Sunday 23 March 2025 00:00:41 +0000 (0:00:06.428) 0:02:26.301 ********** 2025-03-23 00:00:43.334995 | orchestrator | =============================================================================== 2025-03-23 00:00:43.335005 | orchestrator | common : Ensure fluentd image is present for label check --------------- 29.91s 2025-03-23 00:00:43.335015 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 23.44s 2025-03-23 00:00:43.335030 | orchestrator | common : Restart fluentd container ------------------------------------- 10.28s 2025-03-23 00:00:43.335040 | orchestrator | common : Copying over config.json files for services -------------------- 6.82s 2025-03-23 00:00:43.335050 | orchestrator | common : Restart cron container ----------------------------------------- 6.43s 2025-03-23 00:00:43.335060 | orchestrator | common : Copying over td-agent.conf ------------------------------------- 5.64s 2025-03-23 00:00:43.335070 | orchestrator | common : Check common containers ---------------------------------------- 4.90s 2025-03-23 00:00:43.335080 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.73s 2025-03-23 00:00:43.335090 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.38s 2025-03-23 00:00:43.335100 | orchestrator | common : Copying over cron logrotate config file ------------------------ 4.31s 2025-03-23 00:00:43.335110 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 3.87s 2025-03-23 00:00:43.335120 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.77s 2025-03-23 00:00:43.335130 | orchestrator | common : Ensuring config 
directories have correct owner and permission --- 3.64s 2025-03-23 00:00:43.335143 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.21s 2025-03-23 00:00:43.338134 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.76s 2025-03-23 00:00:43.338242 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.47s 2025-03-23 00:00:43.338262 | orchestrator | common : include_tasks -------------------------------------------------- 2.17s 2025-03-23 00:00:43.338277 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.12s 2025-03-23 00:00:43.338292 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 2.09s 2025-03-23 00:00:43.338306 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 2.05s 2025-03-23 00:00:43.338320 | orchestrator | 2025-03-23 00:00:43 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:00:43.338335 | orchestrator | 2025-03-23 00:00:43 | INFO  | Task 5323edc5-bfcb-4608-8012-48ce6cfb9720 is in state STARTED 2025-03-23 00:00:43.338364 | orchestrator | 2025-03-23 00:00:43 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:00:43.338733 | orchestrator | 2025-03-23 00:00:43 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED 2025-03-23 00:00:46.383988 | orchestrator | 2025-03-23 00:00:43 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:00:46.384082 | orchestrator | 2025-03-23 00:00:46 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:00:46.384772 | orchestrator | 2025-03-23 00:00:46 | INFO  | Task f352bb1c-f322-4596-96d0-39457e8a2e38 is in state STARTED 2025-03-23 00:00:46.384789 | orchestrator | 2025-03-23 00:00:46 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 
2025-03-23 00:00:46.384798 | orchestrator | 2025-03-23 00:00:46 | INFO  | Task 5323edc5-bfcb-4608-8012-48ce6cfb9720 is in state STARTED 2025-03-23 00:00:46.385510 | orchestrator | 2025-03-23 00:00:46 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:00:46.397221 | orchestrator | 2025-03-23 00:00:46 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED 2025-03-23 00:00:49.435247 | orchestrator | 2025-03-23 00:00:46 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:00:49.435384 | orchestrator | 2025-03-23 00:00:49 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:00:49.435703 | orchestrator | 2025-03-23 00:00:49 | INFO  | Task f352bb1c-f322-4596-96d0-39457e8a2e38 is in state STARTED 2025-03-23 00:00:49.435731 | orchestrator | 2025-03-23 00:00:49 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:00:49.436822 | orchestrator | 2025-03-23 00:00:49 | INFO  | Task 5323edc5-bfcb-4608-8012-48ce6cfb9720 is in state STARTED 2025-03-23 00:00:49.437746 | orchestrator | 2025-03-23 00:00:49 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:00:49.438493 | orchestrator | 2025-03-23 00:00:49 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED 2025-03-23 00:00:49.438784 | orchestrator | 2025-03-23 00:00:49 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:00:52.481757 | orchestrator | 2025-03-23 00:00:52 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:00:52.490089 | orchestrator | 2025-03-23 00:00:52 | INFO  | Task f352bb1c-f322-4596-96d0-39457e8a2e38 is in state STARTED 2025-03-23 00:00:52.490120 | orchestrator | 2025-03-23 00:00:52 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:00:52.490135 | orchestrator | 2025-03-23 00:00:52 | INFO  | Task 5323edc5-bfcb-4608-8012-48ce6cfb9720 is in state STARTED 
2025-03-23 00:00:52.490153 | orchestrator | 2025-03-23 00:00:52 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:00:52.492193 | orchestrator | 2025-03-23 00:00:52 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED 2025-03-23 00:00:52.492499 | orchestrator | 2025-03-23 00:00:52 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:00:55.536383 | orchestrator | 2025-03-23 00:00:55 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:00:55.536839 | orchestrator | 2025-03-23 00:00:55 | INFO  | Task f352bb1c-f322-4596-96d0-39457e8a2e38 is in state STARTED 2025-03-23 00:00:55.536891 | orchestrator | 2025-03-23 00:00:55 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:00:55.537410 | orchestrator | 2025-03-23 00:00:55 | INFO  | Task 5323edc5-bfcb-4608-8012-48ce6cfb9720 is in state STARTED 2025-03-23 00:00:55.540595 | orchestrator | 2025-03-23 00:00:55 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:00:55.541002 | orchestrator | 2025-03-23 00:00:55 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED 2025-03-23 00:00:58.589651 | orchestrator | 2025-03-23 00:00:55 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:00:58.589817 | orchestrator | 2025-03-23 00:00:58 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:00:58.589909 | orchestrator | 2025-03-23 00:00:58 | INFO  | Task f352bb1c-f322-4596-96d0-39457e8a2e38 is in state STARTED 2025-03-23 00:00:58.591071 | orchestrator | 2025-03-23 00:00:58 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:00:58.591854 | orchestrator | 2025-03-23 00:00:58 | INFO  | Task 5323edc5-bfcb-4608-8012-48ce6cfb9720 is in state STARTED 2025-03-23 00:00:58.592558 | orchestrator | 2025-03-23 00:00:58 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 
2025-03-23 00:00:58.593352 | orchestrator | 2025-03-23 00:00:58 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED 2025-03-23 00:01:01.637080 | orchestrator | 2025-03-23 00:00:58 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:01:01.637212 | orchestrator | 2025-03-23 00:01:01 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:01:01.637693 | orchestrator | 2025-03-23 00:01:01 | INFO  | Task f352bb1c-f322-4596-96d0-39457e8a2e38 is in state STARTED 2025-03-23 00:01:01.639712 | orchestrator | 2025-03-23 00:01:01 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:01:01.640468 | orchestrator | 2025-03-23 00:01:01 | INFO  | Task 5323edc5-bfcb-4608-8012-48ce6cfb9720 is in state STARTED 2025-03-23 00:01:01.640502 | orchestrator | 2025-03-23 00:01:01 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:01:01.641466 | orchestrator | 2025-03-23 00:01:01 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED 2025-03-23 00:01:04.681074 | orchestrator | 2025-03-23 00:01:01 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:01:04.681195 | orchestrator | 2025-03-23 00:01:04 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:01:04.681701 | orchestrator | 2025-03-23 00:01:04 | INFO  | Task f352bb1c-f322-4596-96d0-39457e8a2e38 is in state STARTED 2025-03-23 00:01:04.682357 | orchestrator | 2025-03-23 00:01:04 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:01:04.685702 | orchestrator | 2025-03-23 00:01:04 | INFO  | Task 5323edc5-bfcb-4608-8012-48ce6cfb9720 is in state STARTED 2025-03-23 00:01:04.686283 | orchestrator | 2025-03-23 00:01:04 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:01:04.687135 | orchestrator | 2025-03-23 00:01:04 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED 
2025-03-23 00:01:04.687511 | orchestrator | 2025-03-23 00:01:04 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:01:07.746703 | orchestrator | 2025-03-23 00:01:07 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:01:07.748118 | orchestrator | 2025-03-23 00:01:07 | INFO  | Task f352bb1c-f322-4596-96d0-39457e8a2e38 is in state STARTED 2025-03-23 00:01:07.748734 | orchestrator | 2025-03-23 00:01:07 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:01:07.749756 | orchestrator | 2025-03-23 00:01:07 | INFO  | Task 5323edc5-bfcb-4608-8012-48ce6cfb9720 is in state STARTED 2025-03-23 00:01:07.752020 | orchestrator | 2025-03-23 00:01:07 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:01:07.752641 | orchestrator | 2025-03-23 00:01:07 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED 2025-03-23 00:01:10.794736 | orchestrator | 2025-03-23 00:01:07 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:01:10.794868 | orchestrator | 2025-03-23 00:01:10 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:01:10.797368 | orchestrator | 2025-03-23 00:01:10 | INFO  | Task f352bb1c-f322-4596-96d0-39457e8a2e38 is in state SUCCESS 2025-03-23 00:01:10.800821 | orchestrator | 2025-03-23 00:01:10 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:01:10.802305 | orchestrator | 2025-03-23 00:01:10 | INFO  | Task 5323edc5-bfcb-4608-8012-48ce6cfb9720 is in state STARTED 2025-03-23 00:01:10.804762 | orchestrator | 2025-03-23 00:01:10 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED 2025-03-23 00:01:10.807910 | orchestrator | 2025-03-23 00:01:10 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:01:10.809186 | orchestrator | 2025-03-23 00:01:10 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED 
2025-03-23 00:01:10.809830 | orchestrator | 2025-03-23 00:01:10 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:01:13.924848 | orchestrator | 2025-03-23 00:01:13 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:01:13.927793 | orchestrator | 2025-03-23 00:01:13 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:01:13.929987 | orchestrator | 2025-03-23 00:01:13 | INFO  | Task 5323edc5-bfcb-4608-8012-48ce6cfb9720 is in state STARTED 2025-03-23 00:01:13.933666 | orchestrator | 2025-03-23 00:01:13 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED 2025-03-23 00:01:13.936436 | orchestrator | 2025-03-23 00:01:13 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:01:13.939721 | orchestrator | 2025-03-23 00:01:13 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED 2025-03-23 00:01:17.011439 | orchestrator | 2025-03-23 00:01:13 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:01:17.011527 | orchestrator | 2025-03-23 00:01:17 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:01:17.012944 | orchestrator | 2025-03-23 00:01:17 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:01:17.013603 | orchestrator | 2025-03-23 00:01:17 | INFO  | Task 5323edc5-bfcb-4608-8012-48ce6cfb9720 is in state STARTED 2025-03-23 00:01:17.013675 | orchestrator | 2025-03-23 00:01:17 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED 2025-03-23 00:01:17.015530 | orchestrator | 2025-03-23 00:01:17 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:01:17.017811 | orchestrator | 2025-03-23 00:01:17 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED 2025-03-23 00:01:20.078115 | orchestrator | 2025-03-23 00:01:17 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:01:20.078234 | 
orchestrator | 2025-03-23 00:01:20 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:01:20.079049 | orchestrator | 2025-03-23 00:01:20 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:01:20.079083 | orchestrator | 2025-03-23 00:01:20 | INFO  | Task 5323edc5-bfcb-4608-8012-48ce6cfb9720 is in state STARTED 2025-03-23 00:01:20.080221 | orchestrator | 2025-03-23 00:01:20 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED 2025-03-23 00:01:20.083051 | orchestrator | 2025-03-23 00:01:20 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:01:20.083872 | orchestrator | 2025-03-23 00:01:20 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED 2025-03-23 00:01:23.162770 | orchestrator | 2025-03-23 00:01:20 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:01:23.162906 | orchestrator | 2025-03-23 00:01:23 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:01:23.163552 | orchestrator | 2025-03-23 00:01:23 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:01:23.163593 | orchestrator | 2025-03-23 00:01:23 | INFO  | Task 5323edc5-bfcb-4608-8012-48ce6cfb9720 is in state STARTED 2025-03-23 00:01:23.165036 | orchestrator | 2025-03-23 00:01:23 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED 2025-03-23 00:01:23.165981 | orchestrator | 2025-03-23 00:01:23 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:01:23.166441 | orchestrator | 2025-03-23 00:01:23 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED 2025-03-23 00:01:23.166652 | orchestrator | 2025-03-23 00:01:23 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:01:26.233468 | orchestrator | 2025-03-23 00:01:26 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:01:26.236771 | 
orchestrator | 2025-03-23 00:01:26 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:01:26.239749 | orchestrator | 2025-03-23 00:01:26 | INFO  | Task 5323edc5-bfcb-4608-8012-48ce6cfb9720 is in state STARTED 2025-03-23 00:01:26.242275 | orchestrator | 2025-03-23 00:01:26 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED 2025-03-23 00:01:26.246942 | orchestrator | 2025-03-23 00:01:26 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:01:26.252398 | orchestrator | 2025-03-23 00:01:26 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED 2025-03-23 00:01:29.304356 | orchestrator | 2025-03-23 00:01:26 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:01:29.304423 | orchestrator | 2025-03-23 00:01:29 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:01:29.304894 | orchestrator | 2025-03-23 00:01:29 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:01:29.305922 | orchestrator | 2025-03-23 00:01:29 | INFO  | Task 5323edc5-bfcb-4608-8012-48ce6cfb9720 is in state SUCCESS 2025-03-23 00:01:29.308012 | orchestrator | 2025-03-23 00:01:29.308051 | orchestrator | 2025-03-23 00:01:29.308069 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-23 00:01:29.308085 | orchestrator | 2025-03-23 00:01:29.308099 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-23 00:01:29.308113 | orchestrator | Sunday 23 March 2025 00:00:47 +0000 (0:00:00.593) 0:00:00.593 ********** 2025-03-23 00:01:29.308127 | orchestrator | ok: [testbed-node-0] 2025-03-23 00:01:29.308142 | orchestrator | ok: [testbed-node-1] 2025-03-23 00:01:29.308156 | orchestrator | ok: [testbed-node-2] 2025-03-23 00:01:29.308170 | orchestrator | 2025-03-23 00:01:29.308184 | orchestrator | TASK [Group hosts based on enabled services] 
*********************************** 2025-03-23 00:01:29.308207 | orchestrator | Sunday 23 March 2025 00:00:48 +0000 (0:00:00.567) 0:00:01.161 ********** 2025-03-23 00:01:29.308222 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-03-23 00:01:29.308236 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-03-23 00:01:29.308250 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-03-23 00:01:29.308264 | orchestrator | 2025-03-23 00:01:29.308277 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-03-23 00:01:29.308291 | orchestrator | 2025-03-23 00:01:29.308305 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-03-23 00:01:29.308319 | orchestrator | Sunday 23 March 2025 00:00:48 +0000 (0:00:00.446) 0:00:01.607 ********** 2025-03-23 00:01:29.308371 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 00:01:29.308396 | orchestrator | 2025-03-23 00:01:29.308411 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-03-23 00:01:29.308425 | orchestrator | Sunday 23 March 2025 00:00:49 +0000 (0:00:01.084) 0:00:02.692 ********** 2025-03-23 00:01:29.308439 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-03-23 00:01:29.308453 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-03-23 00:01:29.308467 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-03-23 00:01:29.308481 | orchestrator | 2025-03-23 00:01:29.308495 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-03-23 00:01:29.308509 | orchestrator | Sunday 23 March 2025 00:00:51 +0000 (0:00:01.492) 0:00:04.185 ********** 2025-03-23 00:01:29.308523 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-03-23 
00:01:29.308538 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-03-23 00:01:29.308551 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-03-23 00:01:29.308565 | orchestrator | 2025-03-23 00:01:29.308579 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-03-23 00:01:29.308593 | orchestrator | Sunday 23 March 2025 00:00:54 +0000 (0:00:02.658) 0:00:06.844 ********** 2025-03-23 00:01:29.308671 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:01:29.308693 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:01:29.308708 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:01:29.308722 | orchestrator | 2025-03-23 00:01:29.308736 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-03-23 00:01:29.308773 | orchestrator | Sunday 23 March 2025 00:00:58 +0000 (0:00:04.371) 0:00:11.216 ********** 2025-03-23 00:01:29.308787 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:01:29.308801 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:01:29.308815 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:01:29.308829 | orchestrator | 2025-03-23 00:01:29.308843 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 00:01:29.308857 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 00:01:29.308873 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 00:01:29.308888 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 00:01:29.308901 | orchestrator | 2025-03-23 00:01:29.308915 | orchestrator | 2025-03-23 00:01:29.308929 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-23 00:01:29.308943 | orchestrator | Sunday 23 March 
2025 00:01:07 +0000 (0:00:08.568) 0:00:19.785 ********** 2025-03-23 00:01:29.308957 | orchestrator | =============================================================================== 2025-03-23 00:01:29.308971 | orchestrator | memcached : Restart memcached container --------------------------------- 8.57s 2025-03-23 00:01:29.308985 | orchestrator | memcached : Check memcached container ----------------------------------- 4.37s 2025-03-23 00:01:29.308999 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.66s 2025-03-23 00:01:29.309014 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.49s 2025-03-23 00:01:29.309028 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.08s 2025-03-23 00:01:29.309042 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.57s 2025-03-23 00:01:29.309056 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s 2025-03-23 00:01:29.309070 | orchestrator | 2025-03-23 00:01:29.309084 | orchestrator | 2025-03-23 00:01:29.309097 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-23 00:01:29.309111 | orchestrator | 2025-03-23 00:01:29.309125 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-23 00:01:29.309139 | orchestrator | Sunday 23 March 2025 00:00:46 +0000 (0:00:00.748) 0:00:00.748 ********** 2025-03-23 00:01:29.309154 | orchestrator | ok: [testbed-node-0] 2025-03-23 00:01:29.309168 | orchestrator | ok: [testbed-node-1] 2025-03-23 00:01:29.309182 | orchestrator | ok: [testbed-node-2] 2025-03-23 00:01:29.309196 | orchestrator | 2025-03-23 00:01:29.309210 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-23 00:01:29.309234 | orchestrator | Sunday 23 March 2025 00:00:48 +0000 
(0:00:01.164) 0:00:01.913 ********** 2025-03-23 00:01:29.309249 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-03-23 00:01:29.309263 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-03-23 00:01:29.309278 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-03-23 00:01:29.309292 | orchestrator | 2025-03-23 00:01:29.309306 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-03-23 00:01:29.309320 | orchestrator | 2025-03-23 00:01:29.309334 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-03-23 00:01:29.309348 | orchestrator | Sunday 23 March 2025 00:00:48 +0000 (0:00:00.580) 0:00:02.493 ********** 2025-03-23 00:01:29.309362 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 00:01:29.309386 | orchestrator | 2025-03-23 00:01:29.309430 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-03-23 00:01:29.309456 | orchestrator | Sunday 23 March 2025 00:00:50 +0000 (0:00:01.564) 0:00:04.058 ********** 2025-03-23 00:01:29.309480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-23 00:01:29.309501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-23 00:01:29.309516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-23 00:01:29.309531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-23 00:01:29.309546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-23 00:01:29.309570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-23 00:01:29.309593 | orchestrator | 2025-03-23 00:01:29.309627 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-03-23 00:01:29.309642 | orchestrator | Sunday 23 March 2025 00:00:52 +0000 (0:00:02.535) 0:00:06.593 ********** 2025-03-23 00:01:29.309657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-23 00:01:29.309672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-23 00:01:29.309686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-23 00:01:29.309701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-23 00:01:29.309716 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-23 00:01:29.309738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-23 00:01:29.309761 | orchestrator | 2025-03-23 00:01:29.309775 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-03-23 00:01:29.309790 | orchestrator | Sunday 23 March 2025 00:00:56 +0000 (0:00:03.552) 0:00:10.146 ********** 2025-03-23 00:01:29.309804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-23 00:01:29.309819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-23 00:01:29.309834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-23 00:01:29.309849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-23 00:01:29.309864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-23 00:01:29.309878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-23 00:01:29.309899 | orchestrator | 2025-03-23 00:01:29.309919 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-03-23 00:01:29.309934 | orchestrator | Sunday 23 March 2025 00:01:00 +0000 (0:00:04.426) 0:00:14.572 ********** 2025-03-23 00:01:29.309948 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-23 00:01:29.309963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-23 00:01:29.309978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-23 00:01:29.309992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 
'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-23 00:01:29.310007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-23 00:01:29.310071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-23 00:01:29.310096 | orchestrator | 2025-03-23 00:01:29.310111 | orchestrator | TASK [redis : Flush handlers] 
************************************************** 2025-03-23 00:01:29.310125 | orchestrator | Sunday 23 March 2025 00:01:03 +0000 (0:00:02.907) 0:00:17.480 ********** 2025-03-23 00:01:29.310139 | orchestrator | 2025-03-23 00:01:29.310153 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-03-23 00:01:29.310174 | orchestrator | Sunday 23 March 2025 00:01:03 +0000 (0:00:00.155) 0:00:17.635 ********** 2025-03-23 00:01:29.310300 | orchestrator | 2025-03-23 00:01:29.310367 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-03-23 00:01:29.310386 | orchestrator | Sunday 23 March 2025 00:01:03 +0000 (0:00:00.099) 0:00:17.735 ********** 2025-03-23 00:01:29.310400 | orchestrator | 2025-03-23 00:01:29.310414 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-03-23 00:01:29.310428 | orchestrator | Sunday 23 March 2025 00:01:04 +0000 (0:00:00.384) 0:00:18.119 ********** 2025-03-23 00:01:29.310443 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:01:29.310458 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:01:29.310472 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:01:29.310486 | orchestrator | 2025-03-23 00:01:29.310500 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-03-23 00:01:29.310514 | orchestrator | Sunday 23 March 2025 00:01:14 +0000 (0:00:09.714) 0:00:27.834 ********** 2025-03-23 00:01:29.310528 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:01:29.310542 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:01:29.310556 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:01:29.310570 | orchestrator | 2025-03-23 00:01:29.310584 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 00:01:29.310599 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 
failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 00:01:29.310647 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 00:01:29.310662 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 00:01:29.310676 | orchestrator | 2025-03-23 00:01:29.310690 | orchestrator | 2025-03-23 00:01:29.310704 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-23 00:01:29.310718 | orchestrator | Sunday 23 March 2025 00:01:24 +0000 (0:00:10.307) 0:00:38.142 ********** 2025-03-23 00:01:29.310732 | orchestrator | =============================================================================== 2025-03-23 00:01:29.310745 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.31s 2025-03-23 00:01:29.310759 | orchestrator | redis : Restart redis container ----------------------------------------- 9.71s 2025-03-23 00:01:29.310773 | orchestrator | redis : Copying over redis config files --------------------------------- 4.43s 2025-03-23 00:01:29.310787 | orchestrator | redis : Copying over default config.json files -------------------------- 3.55s 2025-03-23 00:01:29.310801 | orchestrator | redis : Check redis containers ------------------------------------------ 2.91s 2025-03-23 00:01:29.310814 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.54s 2025-03-23 00:01:29.310828 | orchestrator | redis : include_tasks --------------------------------------------------- 1.56s 2025-03-23 00:01:29.310842 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.16s 2025-03-23 00:01:29.310856 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.64s 2025-03-23 00:01:29.310870 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s 
2025-03-23 00:01:29.310907 | orchestrator | 2025-03-23 00:01:29 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED 2025-03-23 00:01:32.348329 | orchestrator | 2025-03-23 00:01:29 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:01:32.348497 | orchestrator | 2025-03-23 00:01:29 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED 2025-03-23 00:01:32.348530 | orchestrator | 2025-03-23 00:01:29 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:01:32.348565 | orchestrator | 2025-03-23 00:01:32 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:01:32.348684 | orchestrator | 2025-03-23 00:01:32 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:01:32.349889 | orchestrator | 2025-03-23 00:01:32 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED 2025-03-23 00:01:32.353707 | orchestrator | 2025-03-23 00:01:32 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:01:32.355422 | orchestrator | 2025-03-23 00:01:32 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED 2025-03-23 00:01:35.406950 | orchestrator | 2025-03-23 00:01:32 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:01:35.407077 | orchestrator | 2025-03-23 00:01:35 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:01:35.408523 | orchestrator | 2025-03-23 00:01:35 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:01:35.411071 | orchestrator | 2025-03-23 00:01:35 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED 2025-03-23 00:01:35.412752 | orchestrator | 2025-03-23 00:01:35 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:01:35.414402 | orchestrator | 2025-03-23 00:01:35 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED 
2025-03-23 00:01:35.414712 | orchestrator | 2025-03-23 00:01:35 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:01:38.461992 | orchestrator | 2025-03-23 00:01:38 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:01:38.462226 | orchestrator | 2025-03-23 00:01:38 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:01:38.462249 | orchestrator | 2025-03-23 00:01:38 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED 2025-03-23 00:01:38.462270 | orchestrator | 2025-03-23 00:01:38 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:01:38.463827 | orchestrator | 2025-03-23 00:01:38 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED 2025-03-23 00:01:41.512035 | orchestrator | 2025-03-23 00:01:38 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:01:41.512155 | orchestrator | 2025-03-23 00:01:41 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:01:41.515216 | orchestrator | 2025-03-23 00:01:41 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:01:41.516764 | orchestrator | 2025-03-23 00:01:41 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED 2025-03-23 00:01:41.516797 | orchestrator | 2025-03-23 00:01:41 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:01:41.516817 | orchestrator | 2025-03-23 00:01:41 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED 2025-03-23 00:01:44.562758 | orchestrator | 2025-03-23 00:01:41 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:01:44.562865 | orchestrator | 2025-03-23 00:01:44 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:01:44.563016 | orchestrator | 2025-03-23 00:01:44 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:01:44.563833 | 
orchestrator | 2025-03-23 00:01:44 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED 2025-03-23 00:01:44.564597 | orchestrator | 2025-03-23 00:01:44 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:01:44.565363 | orchestrator | 2025-03-23 00:01:44 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED 2025-03-23 00:01:47.623171 | orchestrator | 2025-03-23 00:01:44 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:01:47.623255 | orchestrator | 2025-03-23 00:01:47 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:01:47.624723 | orchestrator | 2025-03-23 00:01:47 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:01:47.627799 | orchestrator | 2025-03-23 00:01:47 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED 2025-03-23 00:01:47.628059 | orchestrator | 2025-03-23 00:01:47 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:01:47.629341 | orchestrator | 2025-03-23 00:01:47 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED 2025-03-23 00:01:47.630967 | orchestrator | 2025-03-23 00:01:47 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:01:50.686578 | orchestrator | 2025-03-23 00:01:50 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:01:50.686838 | orchestrator | 2025-03-23 00:01:50 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:01:50.689273 | orchestrator | 2025-03-23 00:01:50 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED 2025-03-23 00:01:50.689915 | orchestrator | 2025-03-23 00:01:50 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:01:50.690413 | orchestrator | 2025-03-23 00:01:50 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED 2025-03-23 00:01:50.693691 | 
orchestrator | 2025-03-23 00:01:50 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:01:53.726597 | orchestrator | 2025-03-23 00:01:53 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:01:53.727254 | orchestrator | 2025-03-23 00:01:53 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:01:53.727288 | orchestrator | 2025-03-23 00:01:53 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED 2025-03-23 00:01:53.728421 | orchestrator | 2025-03-23 00:01:53 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:01:53.728464 | orchestrator | 2025-03-23 00:01:53 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED 2025-03-23 00:01:53.728518 | orchestrator | 2025-03-23 00:01:53 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:01:56.777102 | orchestrator | 2025-03-23 00:01:56 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:01:56.777898 | orchestrator | 2025-03-23 00:01:56 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:01:56.777932 | orchestrator | 2025-03-23 00:01:56 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED 2025-03-23 00:01:56.777954 | orchestrator | 2025-03-23 00:01:56 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:01:56.778434 | orchestrator | 2025-03-23 00:01:56 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED 2025-03-23 00:01:56.778773 | orchestrator | 2025-03-23 00:01:56 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:01:59.835534 | orchestrator | 2025-03-23 00:01:59 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:01:59.837641 | orchestrator | 2025-03-23 00:01:59 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:01:59.837951 | orchestrator | 2025-03-23 
00:01:59 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED
2025-03-23 00:01:59.838882 | orchestrator | 2025-03-23 00:01:59 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-23 00:01:59.839336 | orchestrator | 2025-03-23 00:01:59 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED
2025-03-23 00:02:02.901255 | orchestrator | 2025-03-23 00:01:59 | INFO  | Wait 1 second(s) until the next check
2025-03-23 00:02:02.901426 | orchestrator | 2025-03-23 00:02:02 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-23 00:02:02.902594 | orchestrator | 2025-03-23 00:02:02 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED
2025-03-23 00:02:02.907607 | orchestrator | 2025-03-23 00:02:02 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED
2025-03-23 00:02:02.910445 | orchestrator | 2025-03-23 00:02:02 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-23 00:02:02.913833 | orchestrator | 2025-03-23 00:02:02 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED
2025-03-23 00:02:02.917816 | orchestrator | 2025-03-23 00:02:02 | INFO  | Wait 1 second(s) until the next check
2025-03-23 00:02:05.981590 | orchestrator | 2025-03-23 00:02:05 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-23 00:02:05.982423 | orchestrator | 2025-03-23 00:02:05 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED
2025-03-23 00:02:05.983158 | orchestrator | 2025-03-23 00:02:05 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED
2025-03-23 00:02:05.984561 | orchestrator | 2025-03-23 00:02:05 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-23 00:02:05.987022 | orchestrator | 2025-03-23 00:02:05 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED
2025-03-23 00:02:05.990693 | orchestrator | 2025-03-23 00:02:05 | INFO  | Wait 1 second(s) until the next check
2025-03-23 00:02:09.084312 | orchestrator | 2025-03-23 00:02:09 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-23 00:02:09.090984 | orchestrator | 2025-03-23 00:02:09 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED
2025-03-23 00:02:09.097381 | orchestrator | 2025-03-23 00:02:09 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED
2025-03-23 00:02:09.100986 | orchestrator | 2025-03-23 00:02:09 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-23 00:02:09.103805 | orchestrator | 2025-03-23 00:02:09 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED
2025-03-23 00:02:12.156529 | orchestrator | 2025-03-23 00:02:09 | INFO  | Wait 1 second(s) until the next check
2025-03-23 00:02:12.156673 | orchestrator | 2025-03-23 00:02:12 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-23 00:02:12.158649 | orchestrator | 2025-03-23 00:02:12 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED
2025-03-23 00:02:12.162824 | orchestrator | 2025-03-23 00:02:12 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED
2025-03-23 00:02:12.167898 | orchestrator | 2025-03-23 00:02:12 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-23 00:02:12.174690 | orchestrator | 2025-03-23 00:02:12 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED
2025-03-23 00:02:15.228220 | orchestrator | 2025-03-23 00:02:12 | INFO  | Wait 1 second(s) until the next check
2025-03-23 00:02:15.228348 | orchestrator | 2025-03-23 00:02:15 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-23 00:02:15.230249 | orchestrator | 2025-03-23 00:02:15 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED
2025-03-23 00:02:15.231934 | orchestrator | 2025-03-23 00:02:15 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED
2025-03-23 00:02:15.233325 | orchestrator | 2025-03-23 00:02:15 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-23 00:02:15.234869 | orchestrator | 2025-03-23 00:02:15 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED
2025-03-23 00:02:18.301001 | orchestrator | 2025-03-23 00:02:15 | INFO  | Wait 1 second(s) until the next check
2025-03-23 00:02:18.301144 | orchestrator | 2025-03-23 00:02:18 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-23 00:02:18.304184 | orchestrator | 2025-03-23 00:02:18 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED
2025-03-23 00:02:18.304237 | orchestrator | 2025-03-23 00:02:18 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED
2025-03-23 00:02:18.304265 | orchestrator | 2025-03-23 00:02:18 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-23 00:02:18.305023 | orchestrator | 2025-03-23 00:02:18 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED
2025-03-23 00:02:18.306742 | orchestrator | 2025-03-23 00:02:18 | INFO  | Wait 1 second(s) until the next check
2025-03-23 00:02:21.359545 | orchestrator | 2025-03-23 00:02:21 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-23 00:02:21.360833 | orchestrator | 2025-03-23 00:02:21 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED
2025-03-23 00:02:21.361910 | orchestrator | 2025-03-23 00:02:21 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED
2025-03-23 00:02:21.363159 | orchestrator | 2025-03-23 00:02:21 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-23 00:02:21.363985 | orchestrator | 2025-03-23 00:02:21 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED
2025-03-23 00:02:24.442978 | orchestrator | 2025-03-23 00:02:21 | INFO  | Wait 1 second(s) until the next check
2025-03-23 00:02:24.443107 | orchestrator | 2025-03-23 00:02:24 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-23 00:02:24.444820 | orchestrator | 2025-03-23 00:02:24 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED
2025-03-23 00:02:24.446558 | orchestrator | 2025-03-23 00:02:24 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED
2025-03-23 00:02:24.448838 | orchestrator | 2025-03-23 00:02:24 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-23 00:02:24.450580 | orchestrator | 2025-03-23 00:02:24 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED
2025-03-23 00:02:27.481487 | orchestrator | 2025-03-23 00:02:24 | INFO  | Wait 1 second(s) until the next check
2025-03-23 00:02:27.481603 | orchestrator | 2025-03-23 00:02:27 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-23 00:02:27.482184 | orchestrator | 2025-03-23 00:02:27 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED
2025-03-23 00:02:27.484182 | orchestrator | 2025-03-23 00:02:27 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED
2025-03-23 00:02:27.484643 | orchestrator | 2025-03-23 00:02:27 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-23 00:02:27.485379 | orchestrator | 2025-03-23 00:02:27 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED
2025-03-23 00:02:30.531950 | orchestrator | 2025-03-23 00:02:27 | INFO  | Wait 1 second(s) until the next check
2025-03-23 00:02:30.532056 | orchestrator | 2025-03-23 00:02:30 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-23 00:02:30.532390 | orchestrator | 2025-03-23 00:02:30 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED
2025-03-23 00:02:30.532994 | orchestrator | 2025-03-23 00:02:30 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED
2025-03-23 00:02:30.534813 | orchestrator | 2025-03-23 00:02:30 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-23 00:02:30.535332 | orchestrator | 2025-03-23 00:02:30 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state STARTED
2025-03-23 00:02:33.604391 | orchestrator | 2025-03-23 00:02:30 | INFO  | Wait 1 second(s) until the next check
2025-03-23 00:02:33.604507 | orchestrator | 2025-03-23 00:02:33 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED
2025-03-23 00:02:33.605902 | orchestrator | 2025-03-23 00:02:33 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED
2025-03-23 00:02:33.605952 | orchestrator | 2025-03-23 00:02:33 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED
2025-03-23 00:02:33.605965 | orchestrator | 2025-03-23 00:02:33 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-23 00:02:33.605985 | orchestrator | 2025-03-23 00:02:33 | INFO  | Task 2c9f5c51-da2e-4df0-8523-7638b73ed9b2 is in state SUCCESS
2025-03-23 00:02:33.605998 | orchestrator | 2025-03-23 00:02:33 | INFO  | Wait 1 second(s) until the next check
2025-03-23 00:02:33.606079 | orchestrator |
2025-03-23 00:02:33.606096 | orchestrator |
2025-03-23 00:02:33.606109 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-03-23 00:02:33.606123 | orchestrator |
2025-03-23 00:02:33.606135 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-03-23 00:02:33.606148 | orchestrator | Sunday 23 March 2025 00:00:48 +0000 (0:00:00.612) 0:00:00.612 **********
2025-03-23 00:02:33.606161 | orchestrator | ok: [testbed-node-0]
2025-03-23 00:02:33.606184 | orchestrator | ok: [testbed-node-1]
2025-03-23 00:02:33.606197 | orchestrator | ok: [testbed-node-2]
2025-03-23 00:02:33.606210 | orchestrator | ok: [testbed-node-3]
2025-03-23 00:02:33.606222 | orchestrator | ok: [testbed-node-4]
2025-03-23 00:02:33.606234 | orchestrator | ok: [testbed-node-5]
2025-03-23 00:02:33.606247 | orchestrator |
2025-03-23 00:02:33.606260 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-03-23 00:02:33.606273 | orchestrator | Sunday 23 March 2025 00:00:50 +0000 (0:00:01.489) 0:00:02.101 **********
2025-03-23 00:02:33.606285 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-03-23 00:02:33.606298 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-03-23 00:02:33.606310 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-03-23 00:02:33.606322 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-03-23 00:02:33.606334 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-03-23 00:02:33.606348 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-03-23 00:02:33.606377 | orchestrator |
2025-03-23 00:02:33.606390 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-03-23 00:02:33.606402 | orchestrator |
2025-03-23 00:02:33.606414 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-03-23 00:02:33.606426 | orchestrator | Sunday 23 March 2025 00:00:51 +0000 (0:00:01.484) 0:00:03.586 **********
2025-03-23 00:02:33.606439 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-23 00:02:33.606452 | orchestrator |
2025-03-23 00:02:33.606465 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-03-23 00:02:33.606477 |
orchestrator | Sunday 23 March 2025 00:00:54 +0000 (0:00:02.615) 0:00:06.201 **********
2025-03-23 00:02:33.606490 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-03-23 00:02:33.606506 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-03-23 00:02:33.606520 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-03-23 00:02:33.606533 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-03-23 00:02:33.606547 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-03-23 00:02:33.606561 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-03-23 00:02:33.606575 | orchestrator |
2025-03-23 00:02:33.606589 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-03-23 00:02:33.606602 | orchestrator | Sunday 23 March 2025 00:00:56 +0000 (0:00:02.519) 0:00:08.721 **********
2025-03-23 00:02:33.606617 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-03-23 00:02:33.606670 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-03-23 00:02:33.606686 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-03-23 00:02:33.606700 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-03-23 00:02:33.606714 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-03-23 00:02:33.606728 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-03-23 00:02:33.606742 | orchestrator |
2025-03-23 00:02:33.606756 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-03-23 00:02:33.606770 | orchestrator | Sunday 23 March 2025 00:01:00 +0000 (0:00:03.272) 0:00:11.993 **********
2025-03-23 00:02:33.606785 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-03-23 00:02:33.606799 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:02:33.606814 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-03-23 00:02:33.606828 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:02:33.606842 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-03-23 00:02:33.606856 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:02:33.606868 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-03-23 00:02:33.606880 | orchestrator | skipping: [testbed-node-3]
2025-03-23 00:02:33.606892 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-03-23 00:02:33.606904 | orchestrator | skipping: [testbed-node-4]
2025-03-23 00:02:33.606916 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-03-23 00:02:33.606929 | orchestrator | skipping: [testbed-node-5]
2025-03-23 00:02:33.606941 | orchestrator |
2025-03-23 00:02:33.606953 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2025-03-23 00:02:33.606966 | orchestrator | Sunday 23 March 2025 00:01:02 +0000 (0:00:02.458) 0:00:14.452 **********
2025-03-23 00:02:33.606978 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:02:33.606990 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:02:33.607002 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:02:33.607014 | orchestrator | skipping: [testbed-node-3]
2025-03-23 00:02:33.607027 | orchestrator | skipping: [testbed-node-4]
2025-03-23 00:02:33.607039 | orchestrator | skipping: [testbed-node-5]
2025-03-23 00:02:33.607051 | orchestrator |
2025-03-23 00:02:33.607063 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2025-03-23 00:02:33.607082 | orchestrator | Sunday 23 March 2025 00:01:03 +0000 (0:00:00.856) 0:00:15.309 **********
2025-03-23 00:02:33.607109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True,
'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 00:02:33.607126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 00:02:33.607139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 
2025-03-23 00:02:33.607153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 00:02:33.607166 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 00:02:33.607179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 00:02:33.607205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 00:02:33.607218 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 00:02:33.607231 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 00:02:33.607251 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 00:02:33.607264 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 00:02:33.607288 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 00:02:33.607302 | orchestrator | 2025-03-23 00:02:33.607315 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-03-23 00:02:33.607327 | orchestrator | Sunday 23 March 2025 00:01:06 +0000 (0:00:02.952) 0:00:18.262 ********** 2025-03-23 00:02:33.607340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 00:02:33.607353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 00:02:33.607369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 00:02:33.607382 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 00:02:33.607395 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 00:02:33.607430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 00:02:33.607463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 00:02:33.607477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 00:02:33.607490 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 00:02:33.607503 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 00:02:33.607522 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 00:02:33.607542 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 00:02:33.607555 | orchestrator | 2025-03-23 00:02:33.607568 | orchestrator | TASK [openvswitch : Copying over start-ovs file for 
openvswitch-vswitchd] ******
2025-03-23 00:02:33.607580 | orchestrator | Sunday 23 March 2025 00:01:10 +0000 (0:00:05.106) 0:00:22.648 **********
2025-03-23 00:02:33.607593 | orchestrator | changed: [testbed-node-1]
2025-03-23 00:02:33.607605 | orchestrator | changed: [testbed-node-0]
2025-03-23 00:02:33.607638 | orchestrator | changed: [testbed-node-2]
2025-03-23 00:02:33.607654 | orchestrator | changed: [testbed-node-4]
2025-03-23 00:02:33.607666 | orchestrator | changed: [testbed-node-3]
2025-03-23 00:02:33.607678 | orchestrator | changed: [testbed-node-5]
2025-03-23 00:02:33.607690 | orchestrator |
2025-03-23 00:02:33.607703 | orchestrator | TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] ***
2025-03-23 00:02:33.607716 | orchestrator | Sunday 23 March 2025 00:01:15 +0000 (0:00:05.309) 0:00:27.755 **********
2025-03-23 00:02:33.607728 | orchestrator | changed: [testbed-node-0]
2025-03-23 00:02:33.607741 | orchestrator | changed: [testbed-node-1]
2025-03-23 00:02:33.607753 | orchestrator | changed: [testbed-node-2]
2025-03-23 00:02:33.607765 | orchestrator | changed: [testbed-node-3]
2025-03-23 00:02:33.607778 | orchestrator | changed: [testbed-node-4]
2025-03-23 00:02:33.607790 | orchestrator | changed: [testbed-node-5]
2025-03-23 00:02:33.607803 | orchestrator |
2025-03-23 00:02:33.607815 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2025-03-23 00:02:33.607827 | orchestrator | Sunday 23 March 2025 00:01:21 +0000 (0:00:03.981) 0:00:33.065 **********
2025-03-23 00:02:33.607840 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:02:33.607852 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:02:33.607864 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:02:33.607877 | orchestrator | skipping: [testbed-node-3]
2025-03-23 00:02:33.607889 | orchestrator | skipping: [testbed-node-4]
2025-03-23 00:02:33.607902 | orchestrator | skipping: [testbed-node-5]
2025-03-23 00:02:33.607914 | orchestrator | 2025-03-23 00:02:33.607931 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-03-23 00:02:33.607944 | orchestrator | Sunday 23 March 2025 00:01:25 +0000 (0:00:03.981) 0:00:37.046 ********** 2025-03-23 00:02:33.607957 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 00:02:33.607989 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 00:02:33.608003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 00:02:33.608023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 00:02:33.608036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 00:02:33.608049 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-23 00:02:33.608077 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 00:02:33.608090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 00:02:33.608103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 00:02:33.608122 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 00:02:33.608135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 00:02:33.608155 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-23 00:02:33.608174 | orchestrator | 2025-03-23 00:02:33.608187 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-03-23 00:02:33.608199 | orchestrator | Sunday 23 March 2025 00:01:29 +0000 (0:00:04.291) 0:00:41.337 ********** 2025-03-23 00:02:33.608212 | orchestrator | 2025-03-23 00:02:33.608225 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-03-23 00:02:33.608237 | orchestrator | Sunday 23 March 2025 00:01:29 +0000 (0:00:00.128) 0:00:41.466 ********** 2025-03-23 00:02:33.608249 | orchestrator | 2025-03-23 00:02:33.608262 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-03-23 00:02:33.608274 | orchestrator | Sunday 23 March 
2025 00:01:29 +0000 (0:00:00.393) 0:00:41.859 ********** 2025-03-23 00:02:33.608287 | orchestrator | 2025-03-23 00:02:33.608299 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-03-23 00:02:33.608311 | orchestrator | Sunday 23 March 2025 00:01:30 +0000 (0:00:00.252) 0:00:42.112 ********** 2025-03-23 00:02:33.608324 | orchestrator | 2025-03-23 00:02:33.608336 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-03-23 00:02:33.608349 | orchestrator | Sunday 23 March 2025 00:01:30 +0000 (0:00:00.662) 0:00:42.774 ********** 2025-03-23 00:02:33.608361 | orchestrator | 2025-03-23 00:02:33.608374 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-03-23 00:02:33.608386 | orchestrator | Sunday 23 March 2025 00:01:31 +0000 (0:00:00.252) 0:00:43.026 ********** 2025-03-23 00:02:33.608399 | orchestrator | 2025-03-23 00:02:33.608411 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-03-23 00:02:33.608423 | orchestrator | Sunday 23 March 2025 00:01:31 +0000 (0:00:00.515) 0:00:43.541 ********** 2025-03-23 00:02:33.608436 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:02:33.608448 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:02:33.608461 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:02:33.608474 | orchestrator | changed: [testbed-node-4] 2025-03-23 00:02:33.608486 | orchestrator | changed: [testbed-node-3] 2025-03-23 00:02:33.608498 | orchestrator | changed: [testbed-node-5] 2025-03-23 00:02:33.608510 | orchestrator | 2025-03-23 00:02:33.608523 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-03-23 00:02:33.608535 | orchestrator | Sunday 23 March 2025 00:01:44 +0000 (0:00:12.403) 0:00:55.945 ********** 2025-03-23 00:02:33.608548 | orchestrator | ok: [testbed-node-1] 2025-03-23 
00:02:33.608560 | orchestrator | ok: [testbed-node-0] 2025-03-23 00:02:33.608573 | orchestrator | ok: [testbed-node-2] 2025-03-23 00:02:33.608585 | orchestrator | ok: [testbed-node-3] 2025-03-23 00:02:33.608598 | orchestrator | ok: [testbed-node-4] 2025-03-23 00:02:33.608610 | orchestrator | ok: [testbed-node-5] 2025-03-23 00:02:33.608684 | orchestrator | 2025-03-23 00:02:33.608705 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-03-23 00:02:33.608718 | orchestrator | Sunday 23 March 2025 00:01:46 +0000 (0:00:02.488) 0:00:58.434 ********** 2025-03-23 00:02:33.608730 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:02:33.608742 | orchestrator | changed: [testbed-node-3] 2025-03-23 00:02:33.608755 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:02:33.608767 | orchestrator | changed: [testbed-node-4] 2025-03-23 00:02:33.608781 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:02:33.608801 | orchestrator | changed: [testbed-node-5] 2025-03-23 00:02:33.608815 | orchestrator | 2025-03-23 00:02:33.608834 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-03-23 00:02:33.608848 | orchestrator | Sunday 23 March 2025 00:01:59 +0000 (0:00:12.516) 0:01:10.951 ********** 2025-03-23 00:02:33.608860 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-03-23 00:02:33.608873 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-03-23 00:02:33.608892 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-03-23 00:02:33.608905 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-03-23 00:02:33.608917 | orchestrator | changed: [testbed-node-0] => 
(item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-03-23 00:02:33.608929 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-03-23 00:02:33.608942 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-03-23 00:02:33.608954 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-03-23 00:02:33.608967 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-03-23 00:02:33.608979 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-03-23 00:02:33.608992 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-03-23 00:02:33.609004 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-03-23 00:02:33.609017 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-03-23 00:02:33.609029 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-03-23 00:02:33.609042 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-03-23 00:02:33.609054 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-03-23 00:02:33.609066 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-03-23 00:02:33.609079 | orchestrator | ok: [testbed-node-2] => (item={'col': 
'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-03-23 00:02:33.609091 | orchestrator | 2025-03-23 00:02:33.609103 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-03-23 00:02:33.609116 | orchestrator | Sunday 23 March 2025 00:02:10 +0000 (0:00:11.241) 0:01:22.192 ********** 2025-03-23 00:02:33.609129 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-03-23 00:02:33.609142 | orchestrator | skipping: [testbed-node-3] 2025-03-23 00:02:33.609154 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-03-23 00:02:33.609167 | orchestrator | skipping: [testbed-node-4] 2025-03-23 00:02:33.609179 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-03-23 00:02:33.609192 | orchestrator | skipping: [testbed-node-5] 2025-03-23 00:02:33.609204 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-03-23 00:02:33.609216 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-03-23 00:02:33.609229 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-03-23 00:02:33.609241 | orchestrator | 2025-03-23 00:02:33.609253 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-03-23 00:02:33.609266 | orchestrator | Sunday 23 March 2025 00:02:13 +0000 (0:00:03.422) 0:01:25.614 ********** 2025-03-23 00:02:33.609278 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-03-23 00:02:33.609290 | orchestrator | skipping: [testbed-node-3] 2025-03-23 00:02:33.609303 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-03-23 00:02:33.609315 | orchestrator | skipping: [testbed-node-4] 2025-03-23 00:02:33.609328 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-03-23 00:02:33.609348 | orchestrator | skipping: [testbed-node-5] 2025-03-23 00:02:33.609361 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 
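The "Set system-id, hostname and hw-offload" task above loops over items with `col`/`name`/`value` keys (and optionally `state: absent`, as with the `hw-offload` entries that report `ok` because the key is already absent). As a minimal sketch of how each loop item maps onto an `ovs-vsctl` invocation against the `Open_vSwitch` table (a hypothetical helper, not the actual kolla-ansible module):

```python
def ovs_vsctl_cmd(item, table="Open_vSwitch", record="."):
    """Map one loop item from the play output to an ovs-vsctl command string.

    'state: absent' removes the key from the column; otherwise the key is
    set on the column (e.g. external_ids:system-id=testbed-node-0).
    """
    if item.get("state") == "absent":
        return f"ovs-vsctl remove {table} {record} {item['col']} {item['name']}"
    return f"ovs-vsctl set {table} {record} {item['col']}:{item['name']}={item['value']}"

# Items as they appear for testbed-node-0 in the task output above.
items = [
    {"col": "external_ids", "name": "system-id", "value": "testbed-node-0"},
    {"col": "external_ids", "name": "hostname", "value": "testbed-node-0"},
    {"col": "other_config", "name": "hw-offload", "value": True, "state": "absent"},
]
for it in items:
    print(ovs_vsctl_cmd(it))
```

The subsequent bridge and port tasks (`br-ex`, `vxlan0`) follow the same pattern with `ovs-vsctl --may-exist add-br` / `add-port` style idempotent commands; they run only on the network nodes (testbed-node-0/1/2), which is why testbed-node-3/4/5 show `skipping`.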
2025-03-23 00:02:33.609373 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-03-23 00:02:33.609385 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-03-23 00:02:33.609398 | orchestrator | 2025-03-23 00:02:33.609410 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-03-23 00:02:33.609422 | orchestrator | Sunday 23 March 2025 00:02:19 +0000 (0:00:05.976) 0:01:31.591 ********** 2025-03-23 00:02:33.609434 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:02:33.609446 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:02:33.609459 | orchestrator | changed: [testbed-node-3] 2025-03-23 00:02:33.609471 | orchestrator | changed: [testbed-node-4] 2025-03-23 00:02:33.609483 | orchestrator | changed: [testbed-node-5] 2025-03-23 00:02:33.609496 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:02:33.609508 | orchestrator | 2025-03-23 00:02:33.609521 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 00:02:33.609538 | orchestrator | testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-23 00:02:36.657884 | orchestrator | testbed-node-1 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-23 00:02:36.657999 | orchestrator | testbed-node-2 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-23 00:02:36.658086 | orchestrator | testbed-node-3 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-23 00:02:36.658104 | orchestrator | testbed-node-4 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-23 00:02:36.658137 | orchestrator | testbed-node-5 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-23 00:02:36.658152 | orchestrator | 2025-03-23 00:02:36.658166 | orchestrator | 2025-03-23 00:02:36.658181 | 
orchestrator | TASKS RECAP ******************************************************************** 2025-03-23 00:02:36.658196 | orchestrator | Sunday 23 March 2025 00:02:31 +0000 (0:00:12.082) 0:01:43.673 ********** 2025-03-23 00:02:36.658210 | orchestrator | =============================================================================== 2025-03-23 00:02:36.658230 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 24.60s 2025-03-23 00:02:36.658244 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 12.40s 2025-03-23 00:02:36.658258 | orchestrator | openvswitch : Set system-id, hostname and hw-offload ------------------- 11.24s 2025-03-23 00:02:36.658272 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 5.98s 2025-03-23 00:02:36.658285 | orchestrator | openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server --- 5.31s 2025-03-23 00:02:36.658300 | orchestrator | openvswitch : Copying over start-ovs file for openvswitch-vswitchd ------ 5.11s 2025-03-23 00:02:36.658314 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.39s 2025-03-23 00:02:36.658327 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 4.29s 2025-03-23 00:02:36.658341 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 3.98s 2025-03-23 00:02:36.658355 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.42s 2025-03-23 00:02:36.658369 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 3.27s 2025-03-23 00:02:36.658382 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.95s 2025-03-23 00:02:36.658396 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.62s 2025-03-23 00:02:36.658437 | orchestrator 
| module-load : Load modules ---------------------------------------------- 2.52s 2025-03-23 00:02:36.658453 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.49s 2025-03-23 00:02:36.658469 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.46s 2025-03-23 00:02:36.658485 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.20s 2025-03-23 00:02:36.658501 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.49s 2025-03-23 00:02:36.658516 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.48s 2025-03-23 00:02:36.658533 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.86s 2025-03-23 00:02:36.658566 | orchestrator | 2025-03-23 00:02:36 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:02:36.664879 | orchestrator | 2025-03-23 00:02:36 | INFO  | Task 81fb58fb-c6ed-4ad1-a111-40e38ebb7882 is in state STARTED 2025-03-23 00:02:36.665853 | orchestrator | 2025-03-23 00:02:36 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:02:36.670090 | orchestrator | 2025-03-23 00:02:36 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED 2025-03-23 00:02:39.712493 | orchestrator | 2025-03-23 00:02:36 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:02:39.712606 | orchestrator | 2025-03-23 00:02:36 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:02:39.712690 | orchestrator | 2025-03-23 00:02:39 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:02:39.713435 | orchestrator | 2025-03-23 00:02:39 | INFO  | Task 81fb58fb-c6ed-4ad1-a111-40e38ebb7882 is in state STARTED 2025-03-23 00:02:39.715419 | orchestrator | 2025-03-23 00:02:39 | INFO  | Task 
54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:02:39.716157 | orchestrator | 2025-03-23 00:02:39 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED 2025-03-23 00:02:39.717059 | orchestrator | 2025-03-23 00:02:39 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:02:39.717357 | orchestrator | 2025-03-23 00:02:39 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:02:42.765444 | orchestrator | 2025-03-23 00:02:42 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:02:42.768252 | orchestrator | 2025-03-23 00:02:42 | INFO  | Task 81fb58fb-c6ed-4ad1-a111-40e38ebb7882 is in state STARTED 2025-03-23 00:02:42.771682 | orchestrator | 2025-03-23 00:02:42 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:02:42.775114 | orchestrator | 2025-03-23 00:02:42 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED 2025-03-23 00:02:42.778004 | orchestrator | 2025-03-23 00:02:42 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:02:45.826687 | orchestrator | 2025-03-23 00:02:42 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:02:45.826807 | orchestrator | 2025-03-23 00:02:45 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:02:45.827807 | orchestrator | 2025-03-23 00:02:45 | INFO  | Task 81fb58fb-c6ed-4ad1-a111-40e38ebb7882 is in state STARTED 2025-03-23 00:02:45.827840 | orchestrator | 2025-03-23 00:02:45 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:02:45.829965 | orchestrator | 2025-03-23 00:02:45 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED 2025-03-23 00:02:45.831854 | orchestrator | 2025-03-23 00:02:45 | INFO  | Task 35c89cab-9497-4da2-bd4b-baf4fb3ee026 is in state STARTED 2025-03-23 00:02:45.834199 | orchestrator | 2025-03-23 00:02:45 | INFO  | Task 
318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:02:48.897006 | orchestrator | 2025-03-23 00:02:45 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:02:48.897143 | orchestrator | 2025-03-23 00:02:48 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:02:51.940170 | orchestrator | 2025-03-23 00:02:48 | INFO  | Task 81fb58fb-c6ed-4ad1-a111-40e38ebb7882 is in state STARTED 2025-03-23 00:02:51.940339 | orchestrator | 2025-03-23 00:02:48 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:02:51.940363 | orchestrator | 2025-03-23 00:02:48 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED 2025-03-23 00:02:51.940379 | orchestrator | 2025-03-23 00:02:48 | INFO  | Task 35c89cab-9497-4da2-bd4b-baf4fb3ee026 is in state STARTED 2025-03-23 00:02:51.940394 | orchestrator | 2025-03-23 00:02:48 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:02:51.940410 | orchestrator | 2025-03-23 00:02:48 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:02:51.940442 | orchestrator | 2025-03-23 00:02:51 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:02:51.940533 | orchestrator | 2025-03-23 00:02:51 | INFO  | Task 81fb58fb-c6ed-4ad1-a111-40e38ebb7882 is in state STARTED 2025-03-23 00:02:51.943241 | orchestrator | 2025-03-23 00:02:51 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:02:51.943646 | orchestrator | 2025-03-23 00:02:51 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED 2025-03-23 00:02:51.944880 | orchestrator | 2025-03-23 00:02:51 | INFO  | Task 35c89cab-9497-4da2-bd4b-baf4fb3ee026 is in state STARTED 2025-03-23 00:02:51.945477 | orchestrator | 2025-03-23 00:02:51 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:02:54.977457 | orchestrator | 2025-03-23 00:02:51 | INFO  | Wait 1 
second(s) until the next check 2025-03-23 00:02:54.977582 | orchestrator | 2025-03-23 00:02:54 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:02:54.978132 | orchestrator | 2025-03-23 00:02:54 | INFO  | Task 81fb58fb-c6ed-4ad1-a111-40e38ebb7882 is in state STARTED 2025-03-23 00:02:54.978175 | orchestrator | 2025-03-23 00:02:54 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:02:54.983923 | orchestrator | 2025-03-23 00:02:54 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED 2025-03-23 00:02:54.987050 | orchestrator | 2025-03-23 00:02:54 | INFO  | Task 35c89cab-9497-4da2-bd4b-baf4fb3ee026 is in state STARTED 2025-03-23 00:02:54.987901 | orchestrator | 2025-03-23 00:02:54 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:02:58.071746 | orchestrator | 2025-03-23 00:02:54 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:02:58.071869 | orchestrator | 2025-03-23 00:02:58 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:02:58.076375 | orchestrator | 2025-03-23 00:02:58 | INFO  | Task 81fb58fb-c6ed-4ad1-a111-40e38ebb7882 is in state STARTED 2025-03-23 00:02:58.076400 | orchestrator | 2025-03-23 00:02:58 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:02:58.076417 | orchestrator | 2025-03-23 00:02:58 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED 2025-03-23 00:02:58.077190 | orchestrator | 2025-03-23 00:02:58 | INFO  | Task 35c89cab-9497-4da2-bd4b-baf4fb3ee026 is in state STARTED 2025-03-23 00:02:58.078563 | orchestrator | 2025-03-23 00:02:58 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:03:01.114656 | orchestrator | 2025-03-23 00:02:58 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:03:01.114804 | orchestrator | 2025-03-23 00:03:01 | INFO  | Task 
fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:03:01.115157 | orchestrator | 2025-03-23 00:03:01 | INFO  | Task 81fb58fb-c6ed-4ad1-a111-40e38ebb7882 is in state STARTED 2025-03-23 00:03:01.115802 | orchestrator | 2025-03-23 00:03:01 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:03:01.116802 | orchestrator | 2025-03-23 00:03:01 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED 2025-03-23 00:03:01.117363 | orchestrator | 2025-03-23 00:03:01 | INFO  | Task 35c89cab-9497-4da2-bd4b-baf4fb3ee026 is in state SUCCESS 2025-03-23 00:03:01.117910 | orchestrator | 2025-03-23 00:03:01 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:03:01.117990 | orchestrator | 2025-03-23 00:03:01 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:03:04.208422 | orchestrator | 2025-03-23 00:03:04 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:03:04.209182 | orchestrator | 2025-03-23 00:03:04 | INFO  | Task 81fb58fb-c6ed-4ad1-a111-40e38ebb7882 is in state STARTED 2025-03-23 00:03:04.210956 | orchestrator | 2025-03-23 00:03:04 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:03:04.212235 | orchestrator | 2025-03-23 00:03:04 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED 2025-03-23 00:03:04.213180 | orchestrator | 2025-03-23 00:03:04 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:03:07.263602 | orchestrator | 2025-03-23 00:03:04 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:03:07.263785 | orchestrator | 2025-03-23 00:03:07 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:03:07.267739 | orchestrator | 2025-03-23 00:03:07 | INFO  | Task 81fb58fb-c6ed-4ad1-a111-40e38ebb7882 is in state STARTED 2025-03-23 00:03:07.268396 | orchestrator | 2025-03-23 00:03:07 | INFO  | Task 
54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:03:07.271338 | orchestrator | 2025-03-23 00:03:07 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED 2025-03-23 00:03:07.275203 | orchestrator | 2025-03-23 00:03:07 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
[... identical "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" cycles for the same five tasks (fda1cac8, 81fb58fb, 54b20615, 40fc698b, 318a89c3) repeated every ~3 seconds from 00:03:10 through 00:03:44; repeats trimmed ...]
2025-03-23 00:03:47.080115 | orchestrator | 2025-03-23 00:03:44 | INFO  | Wait 1
second(s) until the next check 2025-03-23 00:03:47.080239 | orchestrator | 2025-03-23 00:03:47 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state STARTED 2025-03-23 00:03:47.084821 | orchestrator | 2025-03-23 00:03:47 | INFO  | Task 81fb58fb-c6ed-4ad1-a111-40e38ebb7882 is in state STARTED 2025-03-23 00:03:47.087809 | orchestrator | 2025-03-23 00:03:47 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:03:47.087840 | orchestrator | 2025-03-23 00:03:47 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED 2025-03-23 00:03:47.090084 | orchestrator | 2025-03-23 00:03:47 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:03:50.121362 | orchestrator | 2025-03-23 00:03:47 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:03:50.121510 | orchestrator | 2025-03-23 00:03:50.121533 | orchestrator | None 2025-03-23 00:03:50.121549 | orchestrator | 2025-03-23 00:03:50.121564 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-03-23 00:03:50.121579 | orchestrator | 2025-03-23 00:03:50.121594 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-03-23 00:03:50.121608 | orchestrator | Saturday 22 March 2025 23:59:01 +0000 (0:00:00.418) 0:00:00.418 ******** 2025-03-23 00:03:50.121623 | orchestrator | ok: [testbed-node-3] 2025-03-23 00:03:50.121666 | orchestrator | ok: [testbed-node-4] 2025-03-23 00:03:50.121681 | orchestrator | ok: [testbed-node-5] 2025-03-23 00:03:50.121695 | orchestrator | ok: [testbed-node-0] 2025-03-23 00:03:50.121709 | orchestrator | ok: [testbed-node-1] 2025-03-23 00:03:50.121723 | orchestrator | ok: [testbed-node-2] 2025-03-23 00:03:50.121737 | orchestrator | 2025-03-23 00:03:50.121751 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-03-23 00:03:50.121765 | orchestrator | Saturday 22 March 
2025 23:59:03 +0000 (0:00:01.685) 0:00:02.103 ******** 2025-03-23 00:03:50.121779 | orchestrator | skipping: [testbed-node-3] 2025-03-23 00:03:50.121794 | orchestrator | skipping: [testbed-node-4] 2025-03-23 00:03:50.121808 | orchestrator | skipping: [testbed-node-5] 2025-03-23 00:03:50.121822 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:03:50.121836 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:03:50.121849 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:03:50.121863 | orchestrator | 2025-03-23 00:03:50.121877 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-03-23 00:03:50.121891 | orchestrator | Saturday 22 March 2025 23:59:05 +0000 (0:00:01.922) 0:00:04.026 ******** 2025-03-23 00:03:50.121905 | orchestrator | skipping: [testbed-node-3] 2025-03-23 00:03:50.121920 | orchestrator | skipping: [testbed-node-4] 2025-03-23 00:03:50.121936 | orchestrator | skipping: [testbed-node-5] 2025-03-23 00:03:50.121952 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:03:50.121996 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:03:50.122012 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:03:50.122084 | orchestrator | 2025-03-23 00:03:50.122101 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-03-23 00:03:50.122117 | orchestrator | Saturday 22 March 2025 23:59:07 +0000 (0:00:02.241) 0:00:06.268 ******** 2025-03-23 00:03:50.122236 | orchestrator | changed: [testbed-node-5] 2025-03-23 00:03:50.122255 | orchestrator | changed: [testbed-node-4] 2025-03-23 00:03:50.122272 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:03:50.122288 | orchestrator | changed: [testbed-node-3] 2025-03-23 00:03:50.122302 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:03:50.122316 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:03:50.122330 | orchestrator | 2025-03-23 00:03:50.122344 | orchestrator | TASK 
[k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-03-23 00:03:50.122358 | orchestrator | Saturday 22 March 2025 23:59:10 +0000 (0:00:03.104) 0:00:09.373 ******** 2025-03-23 00:03:50.122372 | orchestrator | changed: [testbed-node-3] 2025-03-23 00:03:50.122385 | orchestrator | changed: [testbed-node-4] 2025-03-23 00:03:50.122399 | orchestrator | changed: [testbed-node-5] 2025-03-23 00:03:50.122413 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:03:50.122426 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:03:50.122440 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:03:50.122454 | orchestrator | 2025-03-23 00:03:50.122468 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-03-23 00:03:50.122487 | orchestrator | Saturday 22 March 2025 23:59:14 +0000 (0:00:04.353) 0:00:13.726 ******** 2025-03-23 00:03:50.122501 | orchestrator | changed: [testbed-node-3] 2025-03-23 00:03:50.122515 | orchestrator | changed: [testbed-node-4] 2025-03-23 00:03:50.122529 | orchestrator | changed: [testbed-node-5] 2025-03-23 00:03:50.122542 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:03:50.122556 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:03:50.122570 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:03:50.122583 | orchestrator | 2025-03-23 00:03:50.122597 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-03-23 00:03:50.122611 | orchestrator | Saturday 22 March 2025 23:59:18 +0000 (0:00:03.523) 0:00:17.250 ******** 2025-03-23 00:03:50.122625 | orchestrator | skipping: [testbed-node-3] 2025-03-23 00:03:50.122664 | orchestrator | skipping: [testbed-node-4] 2025-03-23 00:03:50.122679 | orchestrator | skipping: [testbed-node-5] 2025-03-23 00:03:50.122693 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:03:50.122707 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:03:50.122721 | 
orchestrator | skipping: [testbed-node-2] 2025-03-23 00:03:50.122735 | orchestrator | 2025-03-23 00:03:50.122749 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-03-23 00:03:50.122763 | orchestrator | Saturday 22 March 2025 23:59:19 +0000 (0:00:01.452) 0:00:18.703 ******** 2025-03-23 00:03:50.122777 | orchestrator | skipping: [testbed-node-3] 2025-03-23 00:03:50.122791 | orchestrator | skipping: [testbed-node-4] 2025-03-23 00:03:50.122805 | orchestrator | skipping: [testbed-node-5] 2025-03-23 00:03:50.122818 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:03:50.122832 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:03:50.122846 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:03:50.122860 | orchestrator | 2025-03-23 00:03:50.122874 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-03-23 00:03:50.122887 | orchestrator | Saturday 22 March 2025 23:59:21 +0000 (0:00:01.146) 0:00:19.849 ******** 2025-03-23 00:03:50.122901 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-03-23 00:03:50.122915 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-03-23 00:03:50.122929 | orchestrator | skipping: [testbed-node-3] 2025-03-23 00:03:50.122943 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-03-23 00:03:50.122956 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-03-23 00:03:50.122981 | orchestrator | skipping: [testbed-node-4] 2025-03-23 00:03:50.122995 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-03-23 00:03:50.123009 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-03-23 00:03:50.123023 | orchestrator | skipping: [testbed-node-5] 2025-03-23 00:03:50.123037 | orchestrator 
| skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-03-23 00:03:50.123063 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-03-23 00:03:50.123078 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:03:50.123092 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-03-23 00:03:50.123106 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-03-23 00:03:50.123120 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:03:50.123134 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-03-23 00:03:50.123147 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-03-23 00:03:50.123161 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:03:50.123175 | orchestrator | 2025-03-23 00:03:50.123188 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-03-23 00:03:50.123202 | orchestrator | Saturday 22 March 2025 23:59:23 +0000 (0:00:02.321) 0:00:22.170 ******** 2025-03-23 00:03:50.123216 | orchestrator | skipping: [testbed-node-3] 2025-03-23 00:03:50.123230 | orchestrator | skipping: [testbed-node-4] 2025-03-23 00:03:50.123244 | orchestrator | skipping: [testbed-node-5] 2025-03-23 00:03:50.123258 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:03:50.123272 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:03:50.123285 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:03:50.123299 | orchestrator | 2025-03-23 00:03:50.123313 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-03-23 00:03:50.123328 | orchestrator | Saturday 22 March 2025 23:59:26 +0000 (0:00:02.703) 0:00:24.874 ******** 2025-03-23 00:03:50.123342 | orchestrator | ok: [testbed-node-3] 2025-03-23 00:03:50.123356 | 
orchestrator | ok: [testbed-node-4] 2025-03-23 00:03:50.123370 | orchestrator | ok: [testbed-node-5] 2025-03-23 00:03:50.123384 | orchestrator | ok: [testbed-node-0] 2025-03-23 00:03:50.123398 | orchestrator | ok: [testbed-node-1] 2025-03-23 00:03:50.123411 | orchestrator | ok: [testbed-node-2] 2025-03-23 00:03:50.123425 | orchestrator | 2025-03-23 00:03:50.123439 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-03-23 00:03:50.123453 | orchestrator | Saturday 22 March 2025 23:59:27 +0000 (0:00:01.730) 0:00:26.605 ******** 2025-03-23 00:03:50.123467 | orchestrator | changed: [testbed-node-4] 2025-03-23 00:03:50.123480 | orchestrator | changed: [testbed-node-5] 2025-03-23 00:03:50.123494 | orchestrator | changed: [testbed-node-3] 2025-03-23 00:03:50.123508 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:03:50.123521 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:03:50.123535 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:03:50.123549 | orchestrator | 2025-03-23 00:03:50.123563 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-03-23 00:03:50.123577 | orchestrator | Saturday 22 March 2025 23:59:34 +0000 (0:00:06.913) 0:00:33.518 ******** 2025-03-23 00:03:50.123590 | orchestrator | skipping: [testbed-node-3] 2025-03-23 00:03:50.123604 | orchestrator | skipping: [testbed-node-4] 2025-03-23 00:03:50.123618 | orchestrator | skipping: [testbed-node-5] 2025-03-23 00:03:50.123647 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:03:50.123662 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:03:50.123676 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:03:50.123690 | orchestrator | 2025-03-23 00:03:50.123704 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-03-23 00:03:50.123717 | orchestrator | Saturday 22 March 2025 23:59:36 +0000 (0:00:01.682) 0:00:35.201 ******** 
2025-03-23 00:03:50.123738 | orchestrator | skipping: [testbed-node-3] 2025-03-23 00:03:50.123752 | orchestrator | skipping: [testbed-node-4] 2025-03-23 00:03:50.123767 | orchestrator | skipping: [testbed-node-5] 2025-03-23 00:03:50.123781 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:03:50.123795 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:03:50.123808 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:03:50.123822 | orchestrator | 2025-03-23 00:03:50.123837 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-03-23 00:03:50.123852 | orchestrator | Saturday 22 March 2025 23:59:38 +0000 (0:00:02.178) 0:00:37.379 ******** 2025-03-23 00:03:50.123866 | orchestrator | skipping: [testbed-node-3] 2025-03-23 00:03:50.123879 | orchestrator | skipping: [testbed-node-4] 2025-03-23 00:03:50.123893 | orchestrator | skipping: [testbed-node-5] 2025-03-23 00:03:50.123907 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:03:50.123926 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:03:50.123940 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:03:50.123954 | orchestrator | 2025-03-23 00:03:50.123967 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-03-23 00:03:50.123982 | orchestrator | Saturday 22 March 2025 23:59:39 +0000 (0:00:00.687) 0:00:38.067 ******** 2025-03-23 00:03:50.123996 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-03-23 00:03:50.124015 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2025-03-23 00:03:50.124029 | orchestrator | skipping: [testbed-node-3] 2025-03-23 00:03:50.124043 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-03-23 00:03:50.124108 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2025-03-23 00:03:50.124123 | orchestrator | skipping: [testbed-node-4] 2025-03-23 
00:03:50.124137 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-03-23 00:03:50.124151 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-03-23 00:03:50.124165 | orchestrator | skipping: [testbed-node-5] 2025-03-23 00:03:50.124179 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2025-03-23 00:03:50.124193 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2025-03-23 00:03:50.124207 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:03:50.124221 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2025-03-23 00:03:50.124235 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2025-03-23 00:03:50.124249 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:03:50.124263 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2025-03-23 00:03:50.124277 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-03-23 00:03:50.124290 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:03:50.124304 | orchestrator | 2025-03-23 00:03:50.124318 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-03-23 00:03:50.124340 | orchestrator | Saturday 22 March 2025 23:59:40 +0000 (0:00:01.627) 0:00:39.694 ******** 2025-03-23 00:03:50.124355 | orchestrator | skipping: [testbed-node-3] 2025-03-23 00:03:50.124369 | orchestrator | skipping: [testbed-node-4] 2025-03-23 00:03:50.124383 | orchestrator | skipping: [testbed-node-5] 2025-03-23 00:03:50.124396 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:03:50.124410 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:03:50.124424 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:03:50.124437 | orchestrator | 2025-03-23 00:03:50.124451 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-03-23 00:03:50.124465 | orchestrator | 2025-03-23 00:03:50.124479 | orchestrator | TASK 
[k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-03-23 00:03:50.124493 | orchestrator | Saturday 22 March 2025 23:59:43 +0000 (0:00:02.486) 0:00:42.181 ******** 2025-03-23 00:03:50.124507 | orchestrator | ok: [testbed-node-0] 2025-03-23 00:03:50.124521 | orchestrator | ok: [testbed-node-1] 2025-03-23 00:03:50.124535 | orchestrator | ok: [testbed-node-2] 2025-03-23 00:03:50.124557 | orchestrator | 2025-03-23 00:03:50.124571 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-03-23 00:03:50.124585 | orchestrator | Saturday 22 March 2025 23:59:44 +0000 (0:00:01.271) 0:00:43.453 ******** 2025-03-23 00:03:50.124599 | orchestrator | ok: [testbed-node-1] 2025-03-23 00:03:50.124613 | orchestrator | ok: [testbed-node-0] 2025-03-23 00:03:50.124684 | orchestrator | ok: [testbed-node-2] 2025-03-23 00:03:50.124700 | orchestrator | 2025-03-23 00:03:50.124714 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-03-23 00:03:50.124728 | orchestrator | Saturday 22 March 2025 23:59:46 +0000 (0:00:01.755) 0:00:45.208 ******** 2025-03-23 00:03:50.124742 | orchestrator | ok: [testbed-node-0] 2025-03-23 00:03:50.124756 | orchestrator | ok: [testbed-node-1] 2025-03-23 00:03:50.124770 | orchestrator | ok: [testbed-node-2] 2025-03-23 00:03:50.124784 | orchestrator | 2025-03-23 00:03:50.124798 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-03-23 00:03:50.124811 | orchestrator | Saturday 22 March 2025 23:59:47 +0000 (0:00:01.143) 0:00:46.351 ******** 2025-03-23 00:03:50.124825 | orchestrator | ok: [testbed-node-0] 2025-03-23 00:03:50.124839 | orchestrator | ok: [testbed-node-1] 2025-03-23 00:03:50.124853 | orchestrator | ok: [testbed-node-2] 2025-03-23 00:03:50.124867 | orchestrator | 2025-03-23 00:03:50.124881 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] 
********************************* 2025-03-23 00:03:50.124895 | orchestrator | Saturday 22 March 2025 23:59:48 +0000 (0:00:00.959) 0:00:47.311 ******** 2025-03-23 00:03:50.124908 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:03:50.124923 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:03:50.124937 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:03:50.124950 | orchestrator | 2025-03-23 00:03:50.124964 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-03-23 00:03:50.124978 | orchestrator | Saturday 22 March 2025 23:59:48 +0000 (0:00:00.379) 0:00:47.690 ******** 2025-03-23 00:03:50.124992 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 00:03:50.125006 | orchestrator | 2025-03-23 00:03:50.125020 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-03-23 00:03:50.125034 | orchestrator | Saturday 22 March 2025 23:59:49 +0000 (0:00:00.717) 0:00:48.408 ******** 2025-03-23 00:03:50.125048 | orchestrator | ok: [testbed-node-0] 2025-03-23 00:03:50.125062 | orchestrator | ok: [testbed-node-1] 2025-03-23 00:03:50.125076 | orchestrator | ok: [testbed-node-2] 2025-03-23 00:03:50.125089 | orchestrator | 2025-03-23 00:03:50.125103 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-03-23 00:03:50.125117 | orchestrator | Saturday 22 March 2025 23:59:52 +0000 (0:00:02.664) 0:00:51.073 ******** 2025-03-23 00:03:50.125131 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:03:50.125145 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:03:50.125159 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:03:50.125173 | orchestrator | 2025-03-23 00:03:50.125187 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-03-23 00:03:50.125201 | orchestrator | Saturday 22 
March 2025 23:59:53 +0000 (0:00:00.922) 0:00:51.995 ******** 2025-03-23 00:03:50.125214 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:03:50.125228 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:03:50.125267 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:03:50.125283 | orchestrator | 2025-03-23 00:03:50.125298 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-03-23 00:03:50.125311 | orchestrator | Saturday 22 March 2025 23:59:54 +0000 (0:00:00.850) 0:00:52.846 ******** 2025-03-23 00:03:50.125325 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:03:50.125339 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:03:50.125353 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:03:50.125367 | orchestrator | 2025-03-23 00:03:50.125381 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-03-23 00:03:50.125395 | orchestrator | Saturday 22 March 2025 23:59:55 +0000 (0:00:01.905) 0:00:54.751 ******** 2025-03-23 00:03:50.125416 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:03:50.125430 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:03:50.125444 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:03:50.125458 | orchestrator | 2025-03-23 00:03:50.125472 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-03-23 00:03:50.125486 | orchestrator | Saturday 22 March 2025 23:59:56 +0000 (0:00:00.343) 0:00:55.095 ******** 2025-03-23 00:03:50.125500 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:03:50.125514 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:03:50.125528 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:03:50.125542 | orchestrator | 2025-03-23 00:03:50.125556 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-03-23 00:03:50.125570 | orchestrator | Saturday 22 March 
2025 23:59:56 +0000 (0:00:00.407) 0:00:55.503 ******** 2025-03-23 00:03:50.125584 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:03:50.125597 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:03:50.125611 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:03:50.125625 | orchestrator | 2025-03-23 00:03:50.125656 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-03-23 00:03:50.125670 | orchestrator | Saturday 22 March 2025 23:59:58 +0000 (0:00:02.106) 0:00:57.609 ******** 2025-03-23 00:03:50.125691 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-03-23 00:03:50.125707 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-03-23 00:03:50.125721 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-03-23 00:03:50.125735 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-03-23 00:03:50.125749 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-03-23 00:03:50.125763 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-03-23 00:03:50.125777 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-03-23 00:03:50.125791 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2025-03-23 00:03:50.125805 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-03-23 00:03:50.125819 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-03-23 00:03:50.125840 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-03-23 00:03:50.125854 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-03-23 00:03:50.125868 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-03-23 00:03:50.125882 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-03-23 00:03:50.125895 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
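The FAILED - RETRYING lines above are expected behavior on a fresh cluster: the task polls until every master has registered with the API server, and only fails if the retries are exhausted. In k3s-ansible-style roles this is an `until`/`retries` loop; a hedged sketch of such a task (the task name matches the log, but the check command, group name, and delay are illustrative assumptions, not read from the playbook):

```yaml
- name: Verify that all nodes actually joined (check k3s-init.service if this fails)
  ansible.builtin.command:
    cmd: k3s kubectl get nodes --no-headers   # assumed check command
  register: nodes_joined
  # Retry until as many nodes report in as there are masters in the inventory
  until: nodes_joined.stdout_lines | length >= groups['master'] | length
  retries: 20          # matches the "20 retries left" countdown in the log
  delay: 10
  changed_when: false
```

With three masters joining within roughly a minute, the log's countdown from 20 down to 16 retries before all three hosts return ok is consistent with this pattern.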
2025-03-23 00:03:50.125909 | orchestrator | ok: [testbed-node-1] 2025-03-23 00:03:50.125936 | orchestrator | ok: [testbed-node-0] 2025-03-23 00:03:50.125950 | orchestrator | ok: [testbed-node-2] 2025-03-23 00:03:50.125964 | orchestrator | 2025-03-23 00:03:50.125978 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-03-23 00:03:50.125992 | orchestrator | Sunday 23 March 2025 00:00:54 +0000 (0:00:55.864) 0:01:53.474 ********** 2025-03-23 00:03:50.126006 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:03:50.126059 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:03:50.126076 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:03:50.126091 | orchestrator | 2025-03-23 00:03:50.126110 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-03-23 00:03:50.126124 | orchestrator | Sunday 23 March 2025 00:00:55 +0000 (0:00:00.418) 0:01:53.892 ********** 2025-03-23 00:03:50.126138 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:03:50.126152 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:03:50.126167 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:03:50.126181 | orchestrator | 2025-03-23 00:03:50.126195 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-03-23 00:03:50.126209 | orchestrator | Sunday 23 March 2025 00:00:56 +0000 (0:00:01.209) 0:01:55.102 ********** 2025-03-23 00:03:50.126223 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:03:50.126237 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:03:50.126251 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:03:50.126264 | orchestrator | 2025-03-23 00:03:50.126279 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-03-23 00:03:50.126293 | orchestrator | Sunday 23 March 2025 00:00:57 +0000 (0:00:01.409) 0:01:56.511 ********** 2025-03-23 00:03:50.126307 
| orchestrator | changed: [testbed-node-1] 2025-03-23 00:03:50.126321 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:03:50.126335 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:03:50.126349 | orchestrator | 2025-03-23 00:03:50.126363 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-03-23 00:03:50.126377 | orchestrator | Sunday 23 March 2025 00:01:11 +0000 (0:00:13.876) 0:02:10.388 ********** 2025-03-23 00:03:50.126391 | orchestrator | ok: [testbed-node-1] 2025-03-23 00:03:50.126405 | orchestrator | ok: [testbed-node-0] 2025-03-23 00:03:50.126418 | orchestrator | ok: [testbed-node-2] 2025-03-23 00:03:50.126432 | orchestrator | 2025-03-23 00:03:50.126447 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-03-23 00:03:50.126461 | orchestrator | Sunday 23 March 2025 00:01:12 +0000 (0:00:01.145) 0:02:11.534 ********** 2025-03-23 00:03:50.126475 | orchestrator | ok: [testbed-node-0] 2025-03-23 00:03:50.126489 | orchestrator | ok: [testbed-node-1] 2025-03-23 00:03:50.126503 | orchestrator | ok: [testbed-node-2] 2025-03-23 00:03:50.126517 | orchestrator | 2025-03-23 00:03:50.126531 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-03-23 00:03:50.126545 | orchestrator | Sunday 23 March 2025 00:01:13 +0000 (0:00:01.019) 0:02:12.554 ********** 2025-03-23 00:03:50.126559 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:03:50.126573 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:03:50.126587 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:03:50.126601 | orchestrator | 2025-03-23 00:03:50.126623 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-03-23 00:03:50.126717 | orchestrator | Sunday 23 March 2025 00:01:14 +0000 (0:00:00.944) 0:02:13.499 ********** 2025-03-23 00:03:50.126731 | orchestrator | ok: [testbed-node-0] 
2025-03-23 00:03:50.126746 | orchestrator | ok: [testbed-node-1] 2025-03-23 00:03:50.126759 | orchestrator | ok: [testbed-node-2] 2025-03-23 00:03:50.126773 | orchestrator | 2025-03-23 00:03:50.126787 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-03-23 00:03:50.126801 | orchestrator | Sunday 23 March 2025 00:01:15 +0000 (0:00:01.240) 0:02:14.739 ********** 2025-03-23 00:03:50.126815 | orchestrator | ok: [testbed-node-0] 2025-03-23 00:03:50.126828 | orchestrator | ok: [testbed-node-1] 2025-03-23 00:03:50.126842 | orchestrator | ok: [testbed-node-2] 2025-03-23 00:03:50.126865 | orchestrator | 2025-03-23 00:03:50.126878 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-03-23 00:03:50.126893 | orchestrator | Sunday 23 March 2025 00:01:16 +0000 (0:00:00.413) 0:02:15.153 ********** 2025-03-23 00:03:50.126907 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:03:50.126920 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:03:50.126934 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:03:50.126948 | orchestrator | 2025-03-23 00:03:50.126962 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-03-23 00:03:50.126976 | orchestrator | Sunday 23 March 2025 00:01:17 +0000 (0:00:00.799) 0:02:15.952 ********** 2025-03-23 00:03:50.126989 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:03:50.127003 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:03:50.127017 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:03:50.127030 | orchestrator | 2025-03-23 00:03:50.127044 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-03-23 00:03:50.127059 | orchestrator | Sunday 23 March 2025 00:01:18 +0000 (0:00:01.082) 0:02:17.035 ********** 2025-03-23 00:03:50.127072 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:03:50.127086 | 
orchestrator | changed: [testbed-node-1] 2025-03-23 00:03:50.127100 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:03:50.127114 | orchestrator | 2025-03-23 00:03:50.127128 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-03-23 00:03:50.127142 | orchestrator | Sunday 23 March 2025 00:01:19 +0000 (0:00:01.419) 0:02:18.455 ********** 2025-03-23 00:03:50.127156 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:03:50.127170 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:03:50.127183 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:03:50.127197 | orchestrator | 2025-03-23 00:03:50.127211 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-03-23 00:03:50.127225 | orchestrator | Sunday 23 March 2025 00:01:20 +0000 (0:00:01.191) 0:02:19.646 ********** 2025-03-23 00:03:50.127239 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:03:50.127253 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:03:50.127266 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:03:50.127280 | orchestrator | 2025-03-23 00:03:50.127294 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-03-23 00:03:50.127308 | orchestrator | Sunday 23 March 2025 00:01:21 +0000 (0:00:00.528) 0:02:20.174 ********** 2025-03-23 00:03:50.127321 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:03:50.127335 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:03:50.127349 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:03:50.127363 | orchestrator | 2025-03-23 00:03:50.127377 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-03-23 00:03:50.127391 | orchestrator | Sunday 23 March 2025 00:01:22 +0000 (0:00:00.709) 0:02:20.884 ********** 2025-03-23 00:03:50.127405 | orchestrator | ok: [testbed-node-0] 2025-03-23 00:03:50.127419 | orchestrator | 
ok: [testbed-node-1] 2025-03-23 00:03:50.127433 | orchestrator | ok: [testbed-node-2] 2025-03-23 00:03:50.127446 | orchestrator | 2025-03-23 00:03:50.127460 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-03-23 00:03:50.127474 | orchestrator | Sunday 23 March 2025 00:01:23 +0000 (0:00:01.930) 0:02:22.814 ********** 2025-03-23 00:03:50.127488 | orchestrator | ok: [testbed-node-0] 2025-03-23 00:03:50.127503 | orchestrator | ok: [testbed-node-1] 2025-03-23 00:03:50.127530 | orchestrator | ok: [testbed-node-2] 2025-03-23 00:03:50.127545 | orchestrator | 2025-03-23 00:03:50.127560 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-03-23 00:03:50.127579 | orchestrator | Sunday 23 March 2025 00:01:24 +0000 (0:00:00.937) 0:02:23.751 ********** 2025-03-23 00:03:50.127594 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-03-23 00:03:50.127608 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-03-23 00:03:50.127682 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-03-23 00:03:50.127751 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-03-23 00:03:50.127768 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-03-23 00:03:50.127782 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-03-23 00:03:50.127797 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-03-23 00:03:50.127811 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-03-23 00:03:50.127825 | 
orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-03-23 00:03:50.127839 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-03-23 00:03:50.127854 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-03-23 00:03:50.127868 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-03-23 00:03:50.127896 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-03-23 00:03:50.127911 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-03-23 00:03:50.127925 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-03-23 00:03:50.127939 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-03-23 00:03:50.127953 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-03-23 00:03:50.127967 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-03-23 00:03:50.127982 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-03-23 00:03:50.127996 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-03-23 00:03:50.128010 | orchestrator | 2025-03-23 00:03:50.128024 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-03-23 00:03:50.128039 | orchestrator | 2025-03-23 00:03:50.128053 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-03-23 00:03:50.128067 | orchestrator | Sunday 23 March 2025 00:01:28 +0000 (0:00:03.615) 
0:02:27.366 ********** 2025-03-23 00:03:50.128081 | orchestrator | ok: [testbed-node-3] 2025-03-23 00:03:50.128095 | orchestrator | ok: [testbed-node-4] 2025-03-23 00:03:50.128109 | orchestrator | ok: [testbed-node-5] 2025-03-23 00:03:50.128123 | orchestrator | 2025-03-23 00:03:50.128137 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-03-23 00:03:50.128152 | orchestrator | Sunday 23 March 2025 00:01:29 +0000 (0:00:00.592) 0:02:27.959 ********** 2025-03-23 00:03:50.128165 | orchestrator | ok: [testbed-node-3] 2025-03-23 00:03:50.128179 | orchestrator | ok: [testbed-node-4] 2025-03-23 00:03:50.128193 | orchestrator | ok: [testbed-node-5] 2025-03-23 00:03:50.128206 | orchestrator | 2025-03-23 00:03:50.128219 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-03-23 00:03:50.128231 | orchestrator | Sunday 23 March 2025 00:01:29 +0000 (0:00:00.660) 0:02:28.620 ********** 2025-03-23 00:03:50.128243 | orchestrator | ok: [testbed-node-3] 2025-03-23 00:03:50.128255 | orchestrator | ok: [testbed-node-4] 2025-03-23 00:03:50.128268 | orchestrator | ok: [testbed-node-5] 2025-03-23 00:03:50.128280 | orchestrator | 2025-03-23 00:03:50.128292 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-03-23 00:03:50.128305 | orchestrator | Sunday 23 March 2025 00:01:30 +0000 (0:00:00.379) 0:02:28.999 ********** 2025-03-23 00:03:50.128317 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-23 00:03:50.128337 | orchestrator | 2025-03-23 00:03:50.128349 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-03-23 00:03:50.128362 | orchestrator | Sunday 23 March 2025 00:01:30 +0000 (0:00:00.751) 0:02:29.751 ********** 2025-03-23 00:03:50.128374 | orchestrator | skipping: [testbed-node-3] 2025-03-23 
00:03:50.128386 | orchestrator | skipping: [testbed-node-4] 2025-03-23 00:03:50.128399 | orchestrator | skipping: [testbed-node-5] 2025-03-23 00:03:50.128411 | orchestrator | 2025-03-23 00:03:50.128423 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2025-03-23 00:03:50.128436 | orchestrator | Sunday 23 March 2025 00:01:31 +0000 (0:00:00.351) 0:02:30.103 ********** 2025-03-23 00:03:50.128448 | orchestrator | skipping: [testbed-node-3] 2025-03-23 00:03:50.128460 | orchestrator | skipping: [testbed-node-4] 2025-03-23 00:03:50.128473 | orchestrator | skipping: [testbed-node-5] 2025-03-23 00:03:50.128485 | orchestrator | 2025-03-23 00:03:50.128498 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-03-23 00:03:50.128511 | orchestrator | Sunday 23 March 2025 00:01:31 +0000 (0:00:00.413) 0:02:30.517 ********** 2025-03-23 00:03:50.128523 | orchestrator | skipping: [testbed-node-3] 2025-03-23 00:03:50.128535 | orchestrator | skipping: [testbed-node-4] 2025-03-23 00:03:50.128548 | orchestrator | skipping: [testbed-node-5] 2025-03-23 00:03:50.128560 | orchestrator | 2025-03-23 00:03:50.128572 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-03-23 00:03:50.128584 | orchestrator | Sunday 23 March 2025 00:01:32 +0000 (0:00:00.314) 0:02:30.831 ********** 2025-03-23 00:03:50.128597 | orchestrator | changed: [testbed-node-4] 2025-03-23 00:03:50.128609 | orchestrator | changed: [testbed-node-3] 2025-03-23 00:03:50.128621 | orchestrator | changed: [testbed-node-5] 2025-03-23 00:03:50.128676 | orchestrator | 2025-03-23 00:03:50.128690 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-03-23 00:03:50.128703 | orchestrator | Sunday 23 March 2025 00:01:34 +0000 (0:00:02.123) 0:02:32.955 ********** 2025-03-23 00:03:50.128715 | orchestrator | changed: [testbed-node-4] 2025-03-23 
00:03:50.128727 | orchestrator | changed: [testbed-node-5] 2025-03-23 00:03:50.128740 | orchestrator | changed: [testbed-node-3] 2025-03-23 00:03:50.128752 | orchestrator | 2025-03-23 00:03:50.128764 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-03-23 00:03:50.128777 | orchestrator | 2025-03-23 00:03:50.128789 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-03-23 00:03:50.128801 | orchestrator | Sunday 23 March 2025 00:01:43 +0000 (0:00:09.493) 0:02:42.449 ********** 2025-03-23 00:03:50.128814 | orchestrator | ok: [testbed-manager] 2025-03-23 00:03:50.128826 | orchestrator | 2025-03-23 00:03:50.128838 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-03-23 00:03:50.128850 | orchestrator | Sunday 23 March 2025 00:01:44 +0000 (0:00:00.507) 0:02:42.956 ********** 2025-03-23 00:03:50.128860 | orchestrator | changed: [testbed-manager] 2025-03-23 00:03:50.128870 | orchestrator | 2025-03-23 00:03:50.128880 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-03-23 00:03:50.128890 | orchestrator | Sunday 23 March 2025 00:01:44 +0000 (0:00:00.545) 0:02:43.502 ********** 2025-03-23 00:03:50.128901 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-03-23 00:03:50.128911 | orchestrator | 2025-03-23 00:03:50.128927 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-03-23 00:03:50.128942 | orchestrator | Sunday 23 March 2025 00:01:45 +0000 (0:00:00.948) 0:02:44.451 ********** 2025-03-23 00:03:50.128952 | orchestrator | changed: [testbed-manager] 2025-03-23 00:03:50.128963 | orchestrator | 2025-03-23 00:03:50.128973 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-03-23 00:03:50.128983 | orchestrator | Sunday 23 March 2025 00:01:46 +0000 
(0:00:01.011) 0:02:45.462 ********** 2025-03-23 00:03:50.128993 | orchestrator | changed: [testbed-manager] 2025-03-23 00:03:50.129011 | orchestrator | 2025-03-23 00:03:50.129022 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-03-23 00:03:50.129032 | orchestrator | Sunday 23 March 2025 00:01:47 +0000 (0:00:00.747) 0:02:46.210 ********** 2025-03-23 00:03:50.129042 | orchestrator | changed: [testbed-manager -> localhost] 2025-03-23 00:03:50.129052 | orchestrator | 2025-03-23 00:03:50.129062 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-03-23 00:03:50.129072 | orchestrator | Sunday 23 March 2025 00:01:48 +0000 (0:00:01.165) 0:02:47.375 ********** 2025-03-23 00:03:50.129082 | orchestrator | changed: [testbed-manager -> localhost] 2025-03-23 00:03:50.129092 | orchestrator | 2025-03-23 00:03:50.129103 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-03-23 00:03:50.129113 | orchestrator | Sunday 23 March 2025 00:01:49 +0000 (0:00:00.601) 0:02:47.977 ********** 2025-03-23 00:03:50.129123 | orchestrator | changed: [testbed-manager] 2025-03-23 00:03:50.129133 | orchestrator | 2025-03-23 00:03:50.129143 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-03-23 00:03:50.129153 | orchestrator | Sunday 23 March 2025 00:01:49 +0000 (0:00:00.502) 0:02:48.480 ********** 2025-03-23 00:03:50.129163 | orchestrator | changed: [testbed-manager] 2025-03-23 00:03:50.129174 | orchestrator | 2025-03-23 00:03:50.129184 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-03-23 00:03:50.129194 | orchestrator | 2025-03-23 00:03:50.129204 | orchestrator | TASK [osism.commons.kubectl : Gather variables for each operating system] ****** 2025-03-23 00:03:50.129214 | orchestrator | Sunday 23 March 2025 00:01:50 +0000 (0:00:00.530) 
0:02:49.010 ********** 2025-03-23 00:03:50.129224 | orchestrator | ok: [testbed-manager] 2025-03-23 00:03:50.129234 | orchestrator | 2025-03-23 00:03:50.129244 | orchestrator | TASK [osism.commons.kubectl : Include distribution specific install tasks] ***** 2025-03-23 00:03:50.129254 | orchestrator | Sunday 23 March 2025 00:01:50 +0000 (0:00:00.154) 0:02:49.165 ********** 2025-03-23 00:03:50.129265 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-03-23 00:03:50.129276 | orchestrator | 2025-03-23 00:03:50.129287 | orchestrator | TASK [osism.commons.kubectl : Remove old architecture-dependent repository] **** 2025-03-23 00:03:50.129297 | orchestrator | Sunday 23 March 2025 00:01:50 +0000 (0:00:00.375) 0:02:49.541 ********** 2025-03-23 00:03:50.129307 | orchestrator | ok: [testbed-manager] 2025-03-23 00:03:50.129317 | orchestrator | 2025-03-23 00:03:50.129327 | orchestrator | TASK [osism.commons.kubectl : Install apt-transport-https package] ************* 2025-03-23 00:03:50.129337 | orchestrator | Sunday 23 March 2025 00:01:51 +0000 (0:00:01.270) 0:02:50.812 ********** 2025-03-23 00:03:50.129347 | orchestrator | ok: [testbed-manager] 2025-03-23 00:03:50.129357 | orchestrator | 2025-03-23 00:03:50.129367 | orchestrator | TASK [osism.commons.kubectl : Add repository gpg key] ************************** 2025-03-23 00:03:50.129377 | orchestrator | Sunday 23 March 2025 00:01:53 +0000 (0:00:01.455) 0:02:52.267 ********** 2025-03-23 00:03:50.129388 | orchestrator | changed: [testbed-manager] 2025-03-23 00:03:50.129398 | orchestrator | 2025-03-23 00:03:50.129408 | orchestrator | TASK [osism.commons.kubectl : Set permissions of gpg key] ********************** 2025-03-23 00:03:50.129418 | orchestrator | Sunday 23 March 2025 00:01:54 +0000 (0:00:00.786) 0:02:53.054 ********** 2025-03-23 00:03:50.129429 | orchestrator | ok: [testbed-manager] 2025-03-23 00:03:50.129439 
| orchestrator | 2025-03-23 00:03:50.129449 | orchestrator | TASK [osism.commons.kubectl : Add repository Debian] *************************** 2025-03-23 00:03:50.129459 | orchestrator | Sunday 23 March 2025 00:01:54 +0000 (0:00:00.438) 0:02:53.492 ********** 2025-03-23 00:03:50.129469 | orchestrator | changed: [testbed-manager] 2025-03-23 00:03:50.129479 | orchestrator | 2025-03-23 00:03:50.129489 | orchestrator | TASK [osism.commons.kubectl : Install required packages] *********************** 2025-03-23 00:03:50.129499 | orchestrator | Sunday 23 March 2025 00:02:02 +0000 (0:00:08.108) 0:03:01.601 ********** 2025-03-23 00:03:50.129509 | orchestrator | changed: [testbed-manager] 2025-03-23 00:03:50.129519 | orchestrator | 2025-03-23 00:03:50.129535 | orchestrator | TASK [osism.commons.kubectl : Remove kubectl symlink] ************************** 2025-03-23 00:03:50.129545 | orchestrator | Sunday 23 March 2025 00:02:19 +0000 (0:00:16.754) 0:03:18.356 ********** 2025-03-23 00:03:50.129555 | orchestrator | ok: [testbed-manager] 2025-03-23 00:03:50.129565 | orchestrator | 2025-03-23 00:03:50.129576 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-03-23 00:03:50.129585 | orchestrator | 2025-03-23 00:03:50.129596 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-03-23 00:03:50.129610 | orchestrator | Sunday 23 March 2025 00:02:20 +0000 (0:00:00.588) 0:03:18.945 ********** 2025-03-23 00:03:50.129620 | orchestrator | ok: [testbed-node-0] 2025-03-23 00:03:50.129643 | orchestrator | ok: [testbed-node-1] 2025-03-23 00:03:50.129654 | orchestrator | ok: [testbed-node-2] 2025-03-23 00:03:50.129664 | orchestrator | 2025-03-23 00:03:50.129674 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-03-23 00:03:50.129685 | orchestrator | Sunday 23 March 2025 00:02:20 +0000 (0:00:00.775) 0:03:19.721 ********** 
2025-03-23 00:03:50.129695 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:03:50.129705 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:03:50.129715 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:03:50.129725 | orchestrator | 2025-03-23 00:03:50.129736 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-03-23 00:03:50.129746 | orchestrator | Sunday 23 March 2025 00:02:21 +0000 (0:00:00.461) 0:03:20.182 ********** 2025-03-23 00:03:50.129756 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 00:03:50.129766 | orchestrator | 2025-03-23 00:03:50.129781 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-03-23 00:03:50.129791 | orchestrator | Sunday 23 March 2025 00:02:22 +0000 (0:00:00.794) 0:03:20.976 ********** 2025-03-23 00:03:50.129801 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-03-23 00:03:50.129838 | orchestrator | 2025-03-23 00:03:50.129849 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-03-23 00:03:50.129859 | orchestrator | Sunday 23 March 2025 00:02:23 +0000 (0:00:00.872) 0:03:21.849 ********** 2025-03-23 00:03:50.129869 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-23 00:03:50.129879 | orchestrator | 2025-03-23 00:03:50.129889 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-03-23 00:03:50.129899 | orchestrator | Sunday 23 March 2025 00:02:23 +0000 (0:00:00.757) 0:03:22.606 ********** 2025-03-23 00:03:50.129910 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:03:50.129920 | orchestrator | 2025-03-23 00:03:50.129930 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-03-23 00:03:50.129940 | orchestrator | Sunday 23 March 2025 00:02:24 +0000 
(0:00:00.795) 0:03:23.402 ********** 2025-03-23 00:03:50.129950 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-23 00:03:50.129960 | orchestrator | 2025-03-23 00:03:50.129970 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-03-23 00:03:50.129980 | orchestrator | Sunday 23 March 2025 00:02:25 +0000 (0:00:00.621) 0:03:24.024 ********** 2025-03-23 00:03:50.129990 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:03:50.130001 | orchestrator | 2025-03-23 00:03:50.130011 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-03-23 00:03:50.130057 | orchestrator | Sunday 23 March 2025 00:02:25 +0000 (0:00:00.192) 0:03:24.216 ********** 2025-03-23 00:03:50.130068 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:03:50.130083 | orchestrator | 2025-03-23 00:03:50.130094 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-03-23 00:03:50.130104 | orchestrator | Sunday 23 March 2025 00:02:25 +0000 (0:00:00.265) 0:03:24.482 ********** 2025-03-23 00:03:50.130114 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:03:50.130125 | orchestrator | 2025-03-23 00:03:50.130135 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-03-23 00:03:50.130152 | orchestrator | Sunday 23 March 2025 00:02:25 +0000 (0:00:00.280) 0:03:24.762 ********** 2025-03-23 00:03:50.130163 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:03:50.130173 | orchestrator | 2025-03-23 00:03:50.130183 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-03-23 00:03:50.130193 | orchestrator | Sunday 23 March 2025 00:02:26 +0000 (0:00:00.219) 0:03:24.982 ********** 2025-03-23 00:03:50.130203 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-03-23 00:03:50.130213 | orchestrator | 2025-03-23 00:03:50.130223 | orchestrator | 
TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-03-23 00:03:50.130233 | orchestrator | Sunday 23 March 2025 00:02:35 +0000 (0:00:09.591) 0:03:34.574 ********** 2025-03-23 00:03:50.130244 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2025-03-23 00:03:50.130254 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 2025-03-23 00:03:50.130264 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2025-03-23 00:03:50.130275 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2025-03-23 00:03:50.130285 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2025-03-23 00:03:50.130295 | orchestrator | 2025-03-23 00:03:50.130305 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-03-23 00:03:50.130315 | orchestrator | Sunday 23 March 2025 00:03:17 +0000 (0:00:41.840) 0:04:16.415 ********** 2025-03-23 00:03:50.130325 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-23 00:03:50.130335 | orchestrator | 2025-03-23 00:03:50.130371 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-03-23 00:03:50.130384 | orchestrator | Sunday 23 March 2025 00:03:19 +0000 (0:00:02.171) 0:04:18.586 ********** 2025-03-23 00:03:50.130395 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-03-23 00:03:50.130405 | orchestrator | 2025-03-23 00:03:50.130415 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-03-23 00:03:50.130425 | orchestrator | Sunday 23 March 2025 00:03:20 +0000 (0:00:01.094) 0:04:19.681 ********** 2025-03-23 00:03:50.130436 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-03-23 00:03:50.130446 | orchestrator | 2025-03-23 00:03:50.130460 | orchestrator | TASK [k3s_server_post : Print 
error message if BGP manifests application fails] *** 2025-03-23 00:03:50.130471 | orchestrator | Sunday 23 March 2025 00:03:21 +0000 (0:00:00.966) 0:04:20.648 ********** 2025-03-23 00:03:50.130481 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:03:50.130491 | orchestrator | 2025-03-23 00:03:50.130502 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-03-23 00:03:50.130512 | orchestrator | Sunday 23 March 2025 00:03:22 +0000 (0:00:00.353) 0:04:21.001 ********** 2025-03-23 00:03:50.130522 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2025-03-23 00:03:50.130532 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2025-03-23 00:03:50.130562 | orchestrator | 2025-03-23 00:03:50.130573 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-03-23 00:03:50.130584 | orchestrator | Sunday 23 March 2025 00:03:24 +0000 (0:00:02.530) 0:04:23.531 ********** 2025-03-23 00:03:50.130594 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:03:50.130604 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:03:50.130614 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:03:50.130624 | orchestrator | 2025-03-23 00:03:50.130651 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-03-23 00:03:50.130662 | orchestrator | Sunday 23 March 2025 00:03:25 +0000 (0:00:00.500) 0:04:24.032 ********** 2025-03-23 00:03:50.130672 | orchestrator | ok: [testbed-node-0] 2025-03-23 00:03:50.130686 | orchestrator | ok: [testbed-node-1] 2025-03-23 00:03:50.130703 | orchestrator | ok: [testbed-node-2] 2025-03-23 00:03:50 | INFO  | Task fda1cac8-78c5-49db-8221-06ea700bd3f3 is in state SUCCESS 2025-03-23 00:03:50.130735 | orchestrator | 2025-03-23 00:03:50.130745 | 
orchestrator | PLAY [Apply role k9s] **********************************************************
2025-03-23 00:03:50.130755 | orchestrator |
2025-03-23 00:03:50.130765 | orchestrator | TASK [osism.commons.k9s : Gather variables for each operating system] **********
2025-03-23 00:03:50.130775 | orchestrator | Sunday 23 March 2025 00:03:26 +0000 (0:00:01.160) 0:04:25.192 **********
2025-03-23 00:03:50.130786 | orchestrator | ok: [testbed-manager]
2025-03-23 00:03:50.130796 | orchestrator |
2025-03-23 00:03:50.130806 | orchestrator | TASK [osism.commons.k9s : Include distribution specific install tasks] *********
2025-03-23 00:03:50.130816 | orchestrator | Sunday 23 March 2025 00:03:26 +0000 (0:00:00.185) 0:04:25.378 **********
2025-03-23 00:03:50.130826 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2025-03-23 00:03:50.130837 | orchestrator |
2025-03-23 00:03:50.130847 | orchestrator | TASK [osism.commons.k9s : Install k9s packages] ********************************
2025-03-23 00:03:50.130857 | orchestrator | Sunday 23 March 2025 00:03:27 +0000 (0:00:00.680) 0:04:26.059 **********
2025-03-23 00:03:50.130867 | orchestrator | changed: [testbed-manager]
2025-03-23 00:03:50.130877 | orchestrator |
2025-03-23 00:03:50.130887 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2025-03-23 00:03:50.130897 | orchestrator |
2025-03-23 00:03:50.130907 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2025-03-23 00:03:50.130917 | orchestrator | Sunday 23 March 2025 00:03:34 +0000 (0:00:06.916) 0:04:32.975 **********
2025-03-23 00:03:50.130927 | orchestrator | ok: [testbed-node-3]
2025-03-23 00:03:50.130937 | orchestrator | ok: [testbed-node-4]
2025-03-23 00:03:50.130948 | orchestrator | ok: [testbed-node-5]
2025-03-23 00:03:50.130958 | orchestrator | ok: [testbed-node-0]
2025-03-23 00:03:50.130968 | orchestrator | ok: [testbed-node-1]
2025-03-23 00:03:50.130978 | orchestrator | ok: [testbed-node-2]
2025-03-23 00:03:50.130988 | orchestrator |
2025-03-23 00:03:50.130998 | orchestrator | TASK [Manage labels] ***********************************************************
2025-03-23 00:03:50.131008 | orchestrator | Sunday 23 March 2025 00:03:35 +0000 (0:00:01.122) 0:04:34.098 **********
2025-03-23 00:03:50.131019 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-03-23 00:03:50.131029 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-03-23 00:03:50.131039 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-03-23 00:03:50.131049 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-03-23 00:03:50.131059 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-03-23 00:03:50.131069 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-03-23 00:03:50.131079 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-03-23 00:03:50.131089 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-03-23 00:03:50.131099 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-03-23 00:03:50.131110 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-03-23 00:03:50.131120 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2025-03-23 00:03:50.131130 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2025-03-23 00:03:50.131144 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-03-23 00:03:50.131154 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-03-23 00:03:50.131164 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2025-03-23 00:03:50.131180 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-03-23 00:03:50.131190 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-03-23 00:03:50.131200 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-03-23 00:03:50.131210 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-03-23 00:03:50.131220 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-03-23 00:03:50.131230 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-03-23 00:03:50.131240 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-03-23 00:03:50.131250 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-03-23 00:03:50.131260 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-03-23 00:03:50.131270 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-03-23 00:03:50.131280 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-03-23 00:03:50.131290 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-03-23 00:03:50.131304 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-03-23 00:03:50.131315 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-03-23 00:03:50.131325 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-03-23 00:03:50.131336 | orchestrator |
2025-03-23 00:03:50.131346 | orchestrator | TASK [Manage annotations] ******************************************************
2025-03-23 00:03:50.131356 | orchestrator | Sunday 23 March 2025 00:03:46 +0000 (0:00:11.112) 0:04:45.210 **********
2025-03-23 00:03:50.131366 | orchestrator | skipping: [testbed-node-3]
2025-03-23 00:03:50.131376 | orchestrator | skipping: [testbed-node-4]
2025-03-23 00:03:50.131387 | orchestrator | skipping: [testbed-node-5]
2025-03-23 00:03:50.131397 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:03:50.131407 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:03:50.131417 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:03:50.131427 | orchestrator |
2025-03-23 00:03:50.131437 | orchestrator | TASK [Manage taints] ***********************************************************
2025-03-23 00:03:50.131447 | orchestrator | Sunday 23 March 2025 00:03:46 +0000 (0:00:00.530) 0:04:45.741 **********
2025-03-23 00:03:50.131457 | orchestrator | skipping: [testbed-node-3]
2025-03-23 00:03:50.131467 | orchestrator | skipping: [testbed-node-4]
2025-03-23 00:03:50.131477 | orchestrator | skipping: [testbed-node-5]
2025-03-23 00:03:50.131487 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:03:50.131497 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:03:50.131507 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:03:50.131517 | orchestrator |
2025-03-23 00:03:50.131527 | orchestrator | PLAY RECAP *********************************************************************
2025-03-23 00:03:50.131538 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-23 00:03:50.131548 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0
2025-03-23 00:03:50.131558 | orchestrator | testbed-node-1 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0
2025-03-23 00:03:50.131569 | orchestrator | testbed-node-2 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0
2025-03-23 00:03:50.131579 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-03-23 00:03:50.131594 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-03-23 00:03:50.131605 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-03-23 00:03:50.131615 | orchestrator |
2025-03-23 00:03:50.131625 | orchestrator |
2025-03-23 00:03:50.131674 | orchestrator | TASKS RECAP ********************************************************************
2025-03-23 00:03:50.131685 | orchestrator | Sunday 23 March 2025 00:03:47 +0000 (0:00:00.647) 0:04:46.388 **********
2025-03-23 00:03:50.131695 | orchestrator | ===============================================================================
2025-03-23 00:03:50.131705 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.86s
2025-03-23 00:03:50.131716 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 41.84s
2025-03-23 00:03:50.131726 | orchestrator | osism.commons.kubectl : Install required packages ---------------------- 16.75s
2025-03-23 00:03:50.131736 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 13.88s
2025-03-23 00:03:50.131746 | orchestrator | Manage labels ---------------------------------------------------------- 11.11s
2025-03-23 00:03:50.131757 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 9.59s
2025-03-23 00:03:50.131767 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.49s
2025-03-23 00:03:50.131781 | orchestrator | osism.commons.kubectl : Add repository Debian --------------------------- 8.11s
2025-03-23 00:03:50.131792 | orchestrator | osism.commons.k9s : Install k9s packages -------------------------------- 6.92s
2025-03-23 00:03:50.131802 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.91s
2025-03-23 00:03:50.131812 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 4.35s
2025-03-23 00:03:50.131822 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.62s
2025-03-23 00:03:50.131833 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 3.52s
2025-03-23 00:03:50.131843 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 3.10s
2025-03-23 00:03:50.131853 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 2.70s
2025-03-23 00:03:50.131863 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.66s
2025-03-23 00:03:50.131873 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.53s
2025-03-23 00:03:50.131883 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 2.49s
2025-03-23 00:03:50.131893 | orchestrator | k3s_prereq : Set bridge-nf-call-iptables (just to be sure) -------------- 2.32s
2025-03-23 00:03:50.131909 | orchestrator | k3s_prereq : Set SELinux to disabled state ------------------------------ 2.24s
2025-03-23 00:03:50.131993 | orchestrator | 2025-03-23 00:03:50 | INFO  | Task b38d9662-38ad-4fb6-b054-c0ce4c4276b3 is in state STARTED
2025-03-23 00:03:50.132008 | orchestrator | 2025-03-23 00:03:50 | INFO  | Task 81fb58fb-c6ed-4ad1-a111-40e38ebb7882 is in state STARTED
2025-03-23 00:03:50.132019 | orchestrator | 2025-03-23 00:03:50 | INFO  | Task 60b979bc-7d8e-4ec6-94b8-fb206a71960c is in state STARTED
2025-03-23 00:03:50.132034 | orchestrator | 2025-03-23 00:03:50 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED
2025-03-23 00:03:50.132044 | orchestrator | 2025-03-23 00:03:50 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED
2025-03-23 00:03:50.132055 | orchestrator | 2025-03-23 00:03:50 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-23 00:03:50.132068 | orchestrator | 2025-03-23 00:03:50 | INFO  | Wait 1 second(s) until the next check
2025-03-23 00:03:53.161762 | orchestrator | 2025-03-23 00:03:53 | INFO  | Task b38d9662-38ad-4fb6-b054-c0ce4c4276b3 is in state STARTED
2025-03-23 00:03:53.161939 | orchestrator | 2025-03-23 00:03:53 | INFO  | Task 81fb58fb-c6ed-4ad1-a111-40e38ebb7882 is in state STARTED
2025-03-23 00:03:53.161968 | orchestrator | 2025-03-23 00:03:53 | INFO  | Task 60b979bc-7d8e-4ec6-94b8-fb206a71960c is in state STARTED
2025-03-23 00:03:53.164023 | orchestrator | 2025-03-23 00:03:53 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED
2025-03-23 00:03:53.164539 | orchestrator | 2025-03-23 00:03:53 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED
2025-03-23 00:03:53.165305 | orchestrator | 2025-03-23 00:03:53 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-23 00:03:56.223437 | orchestrator | 2025-03-23 00:03:53 | INFO  | Wait 1 second(s) until the next check
2025-03-23 00:03:56.223565 | orchestrator | 2025-03-23 00:03:56 | INFO  | Task b38d9662-38ad-4fb6-b054-c0ce4c4276b3 is in state STARTED
2025-03-23 00:03:56.226062 | orchestrator | 2025-03-23 00:03:56 | INFO  | Task 81fb58fb-c6ed-4ad1-a111-40e38ebb7882 is in state STARTED
2025-03-23 00:03:56.226093 | orchestrator | 2025-03-23 00:03:56 | INFO  | Task 60b979bc-7d8e-4ec6-94b8-fb206a71960c is in state SUCCESS
2025-03-23 00:03:56.226115 | orchestrator | 2025-03-23 00:03:56 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED
2025-03-23 00:03:56.226805 | orchestrator | 2025-03-23 00:03:56 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED
2025-03-23 00:03:56.227484 | orchestrator | 2025-03-23 00:03:56 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-23 00:03:56.227779 | orchestrator | 2025-03-23 00:03:56 | INFO  | Wait 1 second(s) until the next check
2025-03-23 00:03:59.283397 | orchestrator | 2025-03-23 00:03:59 | INFO  | Task b38d9662-38ad-4fb6-b054-c0ce4c4276b3 is in state STARTED
2025-03-23 00:03:59.284205 | orchestrator | 2025-03-23 00:03:59 | INFO  | Task 81fb58fb-c6ed-4ad1-a111-40e38ebb7882 is in state STARTED
2025-03-23 00:03:59.285937 | orchestrator | 2025-03-23 00:03:59 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED
2025-03-23 00:03:59.286936 | orchestrator | 2025-03-23 00:03:59 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED
2025-03-23 00:03:59.289210 | orchestrator | 2025-03-23 00:03:59 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-23 00:04:02.340981 | orchestrator | 2025-03-23 00:03:59 | INFO  | Wait 1 second(s) until the next check
2025-03-23 00:04:02.341076 | orchestrator | 2025-03-23 00:04:02 | INFO  | Task b38d9662-38ad-4fb6-b054-c0ce4c4276b3 is in state SUCCESS
2025-03-23 00:04:02.344878 | orchestrator | 2025-03-23 00:04:02 | INFO  | Task 81fb58fb-c6ed-4ad1-a111-40e38ebb7882 is in state STARTED
2025-03-23 00:04:02.348387 | orchestrator | 2025-03-23 00:04:02 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED
2025-03-23 00:04:02.351878 | orchestrator | 2025-03-23 00:04:02 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state STARTED
2025-03-23 00:04:02.353981 | orchestrator | 2025-03-23 00:04:02 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-23 00:04:05.389806 | orchestrator | 2025-03-23 00:04:02 | INFO  | Wait 1 second(s) until the next check
2025-03-23 00:04:05.389991 | orchestrator | 2025-03-23 00:04:05 | INFO  | Task 81fb58fb-c6ed-4ad1-a111-40e38ebb7882 is in state STARTED
2025-03-23 00:04:05.390157 | orchestrator | 2025-03-23 00:04:05 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED
2025-03-23 00:04:05.390970 | orchestrator | 2025-03-23 00:04:05 | INFO  | Task 40fc698b-8f21-4ffa-bcf4-c590adcf2b3f is in state SUCCESS
2025-03-23 00:04:05.392714 | orchestrator |
2025-03-23 00:04:05.392752 | orchestrator |
2025-03-23 00:04:05.392768 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2025-03-23 00:04:05.392783 | orchestrator |
2025-03-23 00:04:05.392798 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-03-23 00:04:05.392812 | orchestrator | Sunday 23 March 2025 00:03:51 +0000 (0:00:00.177) 0:00:00.177 **********
2025-03-23 00:04:05.392827 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-03-23 00:04:05.392841 | orchestrator |
2025-03-23 00:04:05.392856 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-03-23 00:04:05.392870 | orchestrator | Sunday 23 March 2025 00:03:53 +0000 (0:00:01.187) 0:00:01.364 **********
2025-03-23 00:04:05.392884 | orchestrator | changed: [testbed-manager]
2025-03-23 00:04:05.392900 | orchestrator |
2025-03-23 00:04:05.392914 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2025-03-23 00:04:05.392929 | orchestrator | Sunday 23 March 2025 00:03:54 +0000 (0:00:01.676) 0:00:03.041 **********
2025-03-23 00:04:05.392943 | orchestrator | changed: [testbed-manager]
2025-03-23 00:04:05.392957 | orchestrator |
2025-03-23 00:04:05.392972 | orchestrator | PLAY RECAP *********************************************************************
2025-03-23 00:04:05.392986 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-23 00:04:05.393002 | orchestrator |
2025-03-23 00:04:05.393016 | orchestrator |
2025-03-23 00:04:05.393031 | orchestrator | TASKS RECAP ********************************************************************
2025-03-23 00:04:05.393045 | orchestrator | Sunday 23 March 2025 00:03:55 +0000 (0:00:00.609) 0:00:03.651 **********
2025-03-23 00:04:05.393060 | orchestrator | ===============================================================================
2025-03-23 00:04:05.393074 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.68s
2025-03-23 00:04:05.393088 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.19s
2025-03-23 00:04:05.393102 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.61s
2025-03-23 00:04:05.393117 | orchestrator |
2025-03-23 00:04:05.393131 | orchestrator |
2025-03-23 00:04:05.393145 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-03-23 00:04:05.393159 | orchestrator |
2025-03-23 00:04:05.393174 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-03-23 00:04:05.393188 | orchestrator | Sunday 23 March 2025 00:03:52 +0000 (0:00:00.262) 0:00:00.262 **********
2025-03-23 00:04:05.393202 | orchestrator | ok: [testbed-manager]
2025-03-23 00:04:05.393218 | orchestrator |
2025-03-23 00:04:05.393232 | orchestrator | TASK [Create .kube directory] **************************************************
2025-03-23 00:04:05.393246 | orchestrator | Sunday 23 March 2025 00:03:53 +0000 (0:00:00.758) 0:00:01.021 **********
2025-03-23 00:04:05.393260 | orchestrator | ok: [testbed-manager]
2025-03-23 00:04:05.393274 | orchestrator |
2025-03-23 00:04:05.393297 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-03-23 00:04:05.393312 | orchestrator | Sunday 23 March 2025 00:03:54 +0000 (0:00:01.067) 0:00:02.089 **********
2025-03-23 00:04:05.393327 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-03-23 00:04:05.393341 | orchestrator |
2025-03-23 00:04:05.393355 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-03-23 00:04:05.393370 | orchestrator | Sunday 23 March 2025 00:03:55 +0000 (0:00:01.143) 0:00:03.232 **********
2025-03-23 00:04:05.393384 | orchestrator | changed: [testbed-manager]
2025-03-23 00:04:05.393398 | orchestrator |
2025-03-23 00:04:05.393413 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-03-23 00:04:05.393427 | orchestrator | Sunday 23 March 2025 00:03:56 +0000 (0:00:01.470) 0:00:04.703 **********
2025-03-23 00:04:05.393441 | orchestrator | changed: [testbed-manager]
2025-03-23 00:04:05.393470 | orchestrator |
2025-03-23 00:04:05.393485 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-03-23 00:04:05.393499 | orchestrator | Sunday 23 March 2025 00:03:57 +0000 (0:00:00.735) 0:00:05.438 **********
2025-03-23 00:04:05.393513 | orchestrator | changed: [testbed-manager -> localhost]
2025-03-23 00:04:05.393528 | orchestrator |
2025-03-23 00:04:05.393542 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-03-23 00:04:05.393556 | orchestrator | Sunday 23 March 2025 00:03:58 +0000 (0:00:01.300) 0:00:06.739 **********
2025-03-23 00:04:05.393571 | orchestrator | changed: [testbed-manager -> localhost]
2025-03-23 00:04:05.393585 | orchestrator |
2025-03-23 00:04:05.393600 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-03-23 00:04:05.393614 | orchestrator | Sunday 23 March 2025 00:03:59 +0000 (0:00:00.538) 0:00:07.277 **********
2025-03-23 00:04:05.393628 | orchestrator | ok: [testbed-manager]
2025-03-23 00:04:05.393663 | orchestrator |
2025-03-23 00:04:05.393678 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-03-23 00:04:05.393692 | orchestrator | Sunday 23 March 2025 00:03:59 +0000 (0:00:00.507) 0:00:07.785 **********
2025-03-23 00:04:05.393705 | orchestrator | ok: [testbed-manager]
2025-03-23 00:04:05.393720 | orchestrator |
2025-03-23 00:04:05.393734 | orchestrator | PLAY RECAP *********************************************************************
2025-03-23 00:04:05.393748 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-23 00:04:05.393762 | orchestrator |
2025-03-23 00:04:05.393776 | orchestrator |
2025-03-23 00:04:05.393790 | orchestrator | TASKS RECAP ********************************************************************
2025-03-23 00:04:05.393804 | orchestrator | Sunday 23 March 2025 00:04:00 +0000 (0:00:00.327) 0:00:08.112 **********
2025-03-23 00:04:05.393818 | orchestrator | ===============================================================================
2025-03-23 00:04:05.393832 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.47s
2025-03-23 00:04:05.393846 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.30s
2025-03-23 00:04:05.393860 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.14s
2025-03-23 00:04:05.393884 | orchestrator | Create .kube directory -------------------------------------------------- 1.07s
2025-03-23 00:04:05.393899 | orchestrator | Get home directory of operator user ------------------------------------- 0.76s
2025-03-23 00:04:05.393913 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.74s
2025-03-23 00:04:05.393927 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.54s
2025-03-23 00:04:05.393941 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.51s
2025-03-23 00:04:05.393955 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.33s
2025-03-23 00:04:05.393969 | orchestrator |
2025-03-23 00:04:05.393984 | orchestrator |
2025-03-23 00:04:05.393998 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-03-23 00:04:05.394011 | orchestrator |
2025-03-23 00:04:05.394068 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-03-23 00:04:05.394084 | orchestrator | Sunday 23 March 2025 00:01:17 +0000 (0:00:00.871) 0:00:00.871 **********
2025-03-23 00:04:05.394098 | orchestrator | ok: [localhost] => {
2025-03-23 00:04:05.394113 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-03-23 00:04:05.394127 | orchestrator | }
2025-03-23 00:04:05.394141 | orchestrator |
2025-03-23 00:04:05.394155 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-03-23 00:04:05.394168 | orchestrator | Sunday 23 March 2025 00:01:18 +0000 (0:00:00.476) 0:00:01.347 **********
2025-03-23 00:04:05.394183 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-03-23 00:04:05.394206 | orchestrator | ...ignoring
2025-03-23 00:04:05.394220 | orchestrator |
2025-03-23 00:04:05.394240 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-03-23 00:04:05.394254 | orchestrator | Sunday 23 March 2025 00:01:22 +0000 (0:00:04.153) 0:00:05.501 **********
2025-03-23 00:04:05.394268 | orchestrator | skipping: [localhost]
2025-03-23 00:04:05.394282 | orchestrator |
2025-03-23 00:04:05.394296 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-03-23 00:04:05.394309 | orchestrator | Sunday 23 March 2025 00:01:22 +0000 (0:00:00.294) 0:00:05.796 **********
2025-03-23 00:04:05.394323 | orchestrator | ok: [localhost]
2025-03-23 00:04:05.394337 | orchestrator |
2025-03-23 00:04:05.394351 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-03-23 00:04:05.394365 | orchestrator |
2025-03-23 00:04:05.394378 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-03-23 00:04:05.394392 | orchestrator | Sunday 23 March 2025 00:01:23 +0000 (0:00:00.854) 0:00:06.650 **********
2025-03-23 00:04:05.394406 | orchestrator | ok: [testbed-node-0]
2025-03-23 00:04:05.394420 | orchestrator | ok: [testbed-node-1]
2025-03-23 00:04:05.394433 | orchestrator | ok: [testbed-node-2]
2025-03-23 00:04:05.394447 | orchestrator |
2025-03-23 00:04:05.394461 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-03-23 00:04:05.394475 | orchestrator | Sunday 23 March 2025 00:01:24 +0000 (0:00:01.075) 0:00:07.725 **********
2025-03-23 00:04:05.394489 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-03-23 00:04:05.394503 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-03-23 00:04:05.394517 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-03-23 00:04:05.394530 | orchestrator |
2025-03-23 00:04:05.394544 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-03-23 00:04:05.394558 | orchestrator |
2025-03-23 00:04:05.394572 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-03-23 00:04:05.394585 | orchestrator | Sunday 23 March 2025 00:01:25 +0000 (0:00:01.079) 0:00:08.804 **********
2025-03-23 00:04:05.394600 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-23 00:04:05.394614 | orchestrator |
2025-03-23 00:04:05.394628 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-03-23 00:04:05.394657 | orchestrator | Sunday 23 March 2025 00:01:27 +0000 (0:00:02.066) 0:00:10.871 **********
2025-03-23 00:04:05.394671 | orchestrator | ok: [testbed-node-0]
2025-03-23 00:04:05.394686 | orchestrator |
2025-03-23 00:04:05.394700 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-03-23 00:04:05.394714 | orchestrator | Sunday 23 March 2025 00:01:29 +0000 (0:00:01.493) 0:00:12.365 **********
2025-03-23 00:04:05.394728 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:04:05.394742 | orchestrator |
2025-03-23 00:04:05.394756 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-03-23 00:04:05.394770 | orchestrator | Sunday 23 March 2025 00:01:30 +0000 (0:00:00.618) 0:00:12.984 **********
2025-03-23 00:04:05.394784 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:04:05.394797 | orchestrator |
2025-03-23 00:04:05.394811 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-03-23 00:04:05.394825 | orchestrator | Sunday 23 March 2025 00:01:30 +0000 (0:00:00.858) 0:00:13.842 **********
2025-03-23 00:04:05.394838 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:04:05.394852 | orchestrator |
2025-03-23 00:04:05.394866 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-03-23 00:04:05.394880 | orchestrator | Sunday 23 March 2025 00:01:31 +0000 (0:00:00.570) 0:00:14.412 **********
2025-03-23 00:04:05.394894 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:04:05.394908 | orchestrator |
2025-03-23 00:04:05.394922 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-03-23 00:04:05.394935 | orchestrator | Sunday 23 March 2025 00:01:31 +0000 (0:00:00.435) 0:00:14.848 **********
2025-03-23 00:04:05.394956 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-23 00:04:05.394970 | orchestrator |
2025-03-23 00:04:05.394984 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-03-23 00:04:05.395005 | orchestrator | Sunday 23 March 2025 00:01:34 +0000 (0:00:02.911) 0:00:17.759 **********
2025-03-23 00:04:05.395020 | orchestrator | ok: [testbed-node-0]
2025-03-23 00:04:05.395034 | orchestrator |
2025-03-23 00:04:05.395048 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-03-23 00:04:05.395062 | orchestrator | Sunday 23 March 2025 00:01:35 +0000 (0:00:01.020) 0:00:18.780 **********
2025-03-23 00:04:05.395076 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:04:05.395090 | orchestrator |
2025-03-23 00:04:05.395104 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2025-03-23 00:04:05.395118 | orchestrator | Sunday 23 March 2025 00:01:36 +0000 (0:00:00.622) 0:00:19.403 **********
2025-03-23 00:04:05.395132 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:04:05.395145 | orchestrator |
2025-03-23 00:04:05.395159 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2025-03-23 00:04:05.395173 | orchestrator | Sunday 23 March 2025 00:01:36 +0000 (0:00:00.514) 0:00:19.917 **********
2025-03-23 00:04:05.395191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-03-23 00:04:05.395211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-03-23 00:04:05.395227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-03-23 00:04:05.395248 | orchestrator |
2025-03-23 00:04:05.395268 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2025-03-23 00:04:05.395282 | orchestrator | Sunday 23 March 2025 00:01:38 +0000 (0:00:01.368) 0:00:21.286 **********
2025-03-23 00:04:05.395306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-03-23 00:04:05.395323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-03-23 00:04:05.395338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-03-23 00:04:05.395353 | orchestrator |
2025-03-23 00:04:05.395373 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2025-03-23 00:04:05.395388 | orchestrator | Sunday 23 March 2025 00:01:40 +0000 (0:00:01.892) 0:00:23.178 **********
2025-03-23 00:04:05.395402 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-03-23 00:04:05.395416 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-03-23 00:04:05.395430 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-03-23 00:04:05.395445 | orchestrator |
2025-03-23 00:04:05.395459 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2025-03-23 00:04:05.395473 | orchestrator | Sunday 23 March 2025 00:01:42 +0000 (0:00:02.028) 0:00:25.206 ********** 2025-03-23 00:04:05.395487 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-03-23 00:04:05.395501 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-03-23 00:04:05.395515 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-03-23 00:04:05.395529 | orchestrator | 2025-03-23 00:04:05.395543 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-03-23 00:04:05.395562 | orchestrator | Sunday 23 March 2025 00:01:45 +0000 (0:00:03.487) 0:00:28.694 ********** 2025-03-23 00:04:05.395577 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-03-23 00:04:05.395591 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-03-23 00:04:05.395605 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-03-23 00:04:05.395619 | orchestrator | 2025-03-23 00:04:05.395650 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-03-23 00:04:05.395664 | orchestrator | Sunday 23 March 2025 00:01:50 +0000 (0:00:04.361) 0:00:33.055 ********** 2025-03-23 00:04:05.395678 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-03-23 00:04:05.395692 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-03-23 00:04:05.395706 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-03-23 00:04:05.395721 | orchestrator | 2025-03-23 00:04:05.395735 | orchestrator | TASK [rabbitmq : Copying over 
definitions.json] ******************************** 2025-03-23 00:04:05.395749 | orchestrator | Sunday 23 March 2025 00:01:53 +0000 (0:00:03.032) 0:00:36.088 ********** 2025-03-23 00:04:05.395763 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-03-23 00:04:05.395782 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-03-23 00:04:05.395797 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-03-23 00:04:05.395811 | orchestrator | 2025-03-23 00:04:05.395825 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-03-23 00:04:05.395839 | orchestrator | Sunday 23 March 2025 00:01:55 +0000 (0:00:02.253) 0:00:38.341 ********** 2025-03-23 00:04:05.395852 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-03-23 00:04:05.395866 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-03-23 00:04:05.395881 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-03-23 00:04:05.395895 | orchestrator | 2025-03-23 00:04:05.395909 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-03-23 00:04:05.395923 | orchestrator | Sunday 23 March 2025 00:01:58 +0000 (0:00:02.715) 0:00:41.057 ********** 2025-03-23 00:04:05.395937 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:04:05.395963 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:04:05.395977 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:04:05.395991 | orchestrator | 2025-03-23 00:04:05.396005 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-03-23 00:04:05.396019 | orchestrator | Sunday 23 March 2025 00:01:59 
+0000 (0:00:01.865) 0:00:42.922 ********** 2025-03-23 00:04:05.396033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-23 00:04:05.396056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-23 00:04:05.396072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-23 00:04:05.396087 | orchestrator | 2025-03-23 00:04:05.396101 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-03-23 00:04:05.396115 | orchestrator | Sunday 23 March 2025 00:02:03 +0000 (0:00:03.078) 0:00:46.001 ********** 2025-03-23 00:04:05.396129 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:04:05.396142 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:04:05.396156 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:04:05.396177 | orchestrator | 2025-03-23 00:04:05.396191 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-03-23 00:04:05.396205 | 
orchestrator | Sunday 23 March 2025 00:02:05 +0000 (0:00:02.185) 0:00:48.186 ********** 2025-03-23 00:04:05.396219 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:04:05.396233 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:04:05.396247 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:04:05.396261 | orchestrator | 2025-03-23 00:04:05.396274 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-03-23 00:04:05.396288 | orchestrator | Sunday 23 March 2025 00:02:13 +0000 (0:00:08.198) 0:00:56.384 ********** 2025-03-23 00:04:05.396302 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:04:05.396316 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:04:05.396330 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:04:05.396344 | orchestrator | 2025-03-23 00:04:05.396358 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-03-23 00:04:05.396372 | orchestrator | 2025-03-23 00:04:05.396386 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-03-23 00:04:05.396399 | orchestrator | Sunday 23 March 2025 00:02:14 +0000 (0:00:00.735) 0:00:57.119 ********** 2025-03-23 00:04:05.396413 | orchestrator | ok: [testbed-node-0] 2025-03-23 00:04:05.396427 | orchestrator | 2025-03-23 00:04:05.396441 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-03-23 00:04:05.396455 | orchestrator | Sunday 23 March 2025 00:02:14 +0000 (0:00:00.754) 0:00:57.874 ********** 2025-03-23 00:04:05.396469 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:04:05.396483 | orchestrator | 2025-03-23 00:04:05.396508 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-03-23 00:04:05.396523 | orchestrator | Sunday 23 March 2025 00:02:15 +0000 (0:00:00.449) 0:00:58.324 ********** 2025-03-23 00:04:05.396538 | orchestrator 
| changed: [testbed-node-0] 2025-03-23 00:04:05.396552 | orchestrator | 2025-03-23 00:04:05.396565 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-03-23 00:04:05.396579 | orchestrator | Sunday 23 March 2025 00:02:23 +0000 (0:00:08.151) 0:01:06.475 ********** 2025-03-23 00:04:05.396593 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:04:05.396607 | orchestrator | 2025-03-23 00:04:05.396621 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-03-23 00:04:05.396662 | orchestrator | 2025-03-23 00:04:05.396677 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-03-23 00:04:05.396691 | orchestrator | Sunday 23 March 2025 00:03:15 +0000 (0:00:51.680) 0:01:58.156 ********** 2025-03-23 00:04:05.396705 | orchestrator | ok: [testbed-node-1] 2025-03-23 00:04:05.396719 | orchestrator | 2025-03-23 00:04:05.396733 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-03-23 00:04:05.396747 | orchestrator | Sunday 23 March 2025 00:03:16 +0000 (0:00:00.827) 0:01:58.984 ********** 2025-03-23 00:04:05.396761 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:04:05.396775 | orchestrator | 2025-03-23 00:04:05.396789 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-03-23 00:04:05.396803 | orchestrator | Sunday 23 March 2025 00:03:16 +0000 (0:00:00.337) 0:01:59.322 ********** 2025-03-23 00:04:05.396817 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:04:05.396830 | orchestrator | 2025-03-23 00:04:05.396844 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-03-23 00:04:05.396858 | orchestrator | Sunday 23 March 2025 00:03:24 +0000 (0:00:08.007) 0:02:07.329 ********** 2025-03-23 00:04:05.396872 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:04:05.396886 
| orchestrator | 2025-03-23 00:04:05.396900 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-03-23 00:04:05.396914 | orchestrator | 2025-03-23 00:04:05.396928 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-03-23 00:04:05.396942 | orchestrator | Sunday 23 March 2025 00:03:35 +0000 (0:00:11.575) 0:02:18.908 ********** 2025-03-23 00:04:05.396963 | orchestrator | ok: [testbed-node-2] 2025-03-23 00:04:05.396977 | orchestrator | 2025-03-23 00:04:05.396997 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-03-23 00:04:05.397012 | orchestrator | Sunday 23 March 2025 00:03:36 +0000 (0:00:00.981) 0:02:19.889 ********** 2025-03-23 00:04:05.397026 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:04:05.397039 | orchestrator | 2025-03-23 00:04:05.397053 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-03-23 00:04:05.397067 | orchestrator | Sunday 23 March 2025 00:03:37 +0000 (0:00:00.566) 0:02:20.456 ********** 2025-03-23 00:04:05.397081 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:04:05.397095 | orchestrator | 2025-03-23 00:04:05.397109 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-03-23 00:04:05.397123 | orchestrator | Sunday 23 March 2025 00:03:46 +0000 (0:00:08.717) 0:02:29.173 ********** 2025-03-23 00:04:05.397137 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:04:05.397151 | orchestrator | 2025-03-23 00:04:05.397165 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-03-23 00:04:05.397179 | orchestrator | 2025-03-23 00:04:05.397193 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-03-23 00:04:05.397207 | orchestrator | Sunday 23 March 2025 00:03:58 +0000 (0:00:12.060) 
0:02:41.234 ********** 2025-03-23 00:04:05.397221 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 00:04:05.397235 | orchestrator | 2025-03-23 00:04:05.397249 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-03-23 00:04:05.397262 | orchestrator | Sunday 23 March 2025 00:03:59 +0000 (0:00:01.236) 0:02:42.473 ********** 2025-03-23 00:04:05.397276 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-03-23 00:04:05.397290 | orchestrator | enable_outward_rabbitmq_True 2025-03-23 00:04:05.397305 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-03-23 00:04:05.397319 | orchestrator | outward_rabbitmq_restart 2025-03-23 00:04:05.397333 | orchestrator | ok: [testbed-node-0] 2025-03-23 00:04:05.397347 | orchestrator | ok: [testbed-node-1] 2025-03-23 00:04:05.397361 | orchestrator | ok: [testbed-node-2] 2025-03-23 00:04:05.397375 | orchestrator | 2025-03-23 00:04:05.397389 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-03-23 00:04:05.397403 | orchestrator | skipping: no hosts matched 2025-03-23 00:04:05.397417 | orchestrator | 2025-03-23 00:04:05.397431 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-03-23 00:04:05.397445 | orchestrator | skipping: no hosts matched 2025-03-23 00:04:05.397458 | orchestrator | 2025-03-23 00:04:05.397472 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-03-23 00:04:05.397486 | orchestrator | skipping: no hosts matched 2025-03-23 00:04:05.397500 | orchestrator | 2025-03-23 00:04:05.397519 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 00:04:05.397534 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-03-23 
00:04:05.397548 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-03-23 00:04:05.397562 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 00:04:05.397576 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-23 00:04:05.397590 | orchestrator | 2025-03-23 00:04:05.397604 | orchestrator | 2025-03-23 00:04:05.397618 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-23 00:04:05.397650 | orchestrator | Sunday 23 March 2025 00:04:02 +0000 (0:00:03.269) 0:02:45.742 ********** 2025-03-23 00:04:05.397664 | orchestrator | =============================================================================== 2025-03-23 00:04:05.397685 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 75.32s 2025-03-23 00:04:05.397699 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 24.88s 2025-03-23 00:04:05.397713 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.20s 2025-03-23 00:04:05.397727 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 4.36s 2025-03-23 00:04:05.397741 | orchestrator | Check RabbitMQ service -------------------------------------------------- 4.15s 2025-03-23 00:04:05.397755 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.49s 2025-03-23 00:04:05.397768 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.27s 2025-03-23 00:04:05.397783 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 3.08s 2025-03-23 00:04:05.397797 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 3.03s 2025-03-23 00:04:05.397811 | 
orchestrator | rabbitmq : include_tasks ------------------------------------------------ 2.91s 2025-03-23 00:04:05.397825 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.73s 2025-03-23 00:04:05.397838 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.56s 2025-03-23 00:04:05.397852 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.25s 2025-03-23 00:04:05.397866 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 2.19s 2025-03-23 00:04:05.397880 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 2.07s 2025-03-23 00:04:05.397894 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.03s 2025-03-23 00:04:05.397908 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.89s 2025-03-23 00:04:05.397927 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.86s 2025-03-23 00:04:08.455131 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.49s 2025-03-23 00:04:08.455248 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.37s 2025-03-23 00:04:08.455266 | orchestrator | 2025-03-23 00:04:05 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:04:08.455282 | orchestrator | 2025-03-23 00:04:05 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:04:08.455313 | orchestrator | 2025-03-23 00:04:08 | INFO  | Task 81fb58fb-c6ed-4ad1-a111-40e38ebb7882 is in state STARTED 2025-03-23 00:04:08.458670 | orchestrator | 2025-03-23 00:04:08 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:04:08.458968 | orchestrator | 2025-03-23 00:04:08 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 
2025-03-23 00:04:08.458999 | orchestrator | 2025-03-23 00:04:08 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:05:40.072006 | orchestrator | 2025-03-23 00:05:40.072107 | orchestrator | 2025-03-23 00:05:40.072125 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-23 00:05:40.072141 | orchestrator | 2025-03-23 00:05:40.072155 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-23 00:05:40.072171 | orchestrator | Sunday 23 March 2025 00:02:36 +0000 (0:00:00.263) 0:00:00.263 ********** 2025-03-23 00:05:40.072185 | orchestrator | ok: [testbed-node-3] 2025-03-23 00:05:40.072200 | orchestrator | ok: [testbed-node-4] 2025-03-23 00:05:40.072215 | orchestrator | ok: [testbed-node-5] 2025-03-23 00:05:40.072229 | orchestrator | ok:
[testbed-node-0] 2025-03-23 00:05:40.072244 | orchestrator | ok: [testbed-node-1] 2025-03-23 00:05:40.072258 | orchestrator | ok: [testbed-node-2] 2025-03-23 00:05:40.072299 | orchestrator | 2025-03-23 00:05:40.072314 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-23 00:05:40.072328 | orchestrator | Sunday 23 March 2025 00:02:38 +0000 (0:00:01.527) 0:00:01.791 ********** 2025-03-23 00:05:40.072342 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-03-23 00:05:40.072356 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-03-23 00:05:40.072370 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-03-23 00:05:40.072384 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-03-23 00:05:40.072546 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-03-23 00:05:40.072565 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-03-23 00:05:40.072580 | orchestrator | 2025-03-23 00:05:40.072594 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-03-23 00:05:40.072608 | orchestrator | 2025-03-23 00:05:40.072622 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-03-23 00:05:40.072636 | orchestrator | Sunday 23 March 2025 00:02:40 +0000 (0:00:02.130) 0:00:03.922 ********** 2025-03-23 00:05:40.072680 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 00:05:40.072696 | orchestrator | 2025-03-23 00:05:40.072724 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-03-23 00:05:40.072739 | orchestrator | Sunday 23 March 2025 00:02:43 +0000 (0:00:03.051) 0:00:06.973 ********** 2025-03-23 00:05:40.072756 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.072785 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.072801 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.072815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.072830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.072844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.072870 | orchestrator | 2025-03-23 00:05:40.072898 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-03-23 00:05:40.072913 | orchestrator | Sunday 23 March 2025 00:02:44 +0000 (0:00:01.509) 0:00:08.483 ********** 2025-03-23 00:05:40.072928 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.072947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.072962 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.072976 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.072991 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.073005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.073019 | orchestrator | 2025-03-23 00:05:40.073034 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-03-23 00:05:40.073048 | orchestrator | Sunday 23 March 
2025 00:02:49 +0000 (0:00:04.516) 0:00:12.999 ********** 2025-03-23 00:05:40.073062 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.073076 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.073104 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.073120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.073135 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.073154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.073168 | orchestrator | 2025-03-23 00:05:40.073185 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-03-23 00:05:40.073201 | orchestrator | Sunday 23 March 2025 00:02:51 +0000 (0:00:02.393) 0:00:15.393 ********** 2025-03-23 00:05:40.073218 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.073235 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.073251 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.073268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.073290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.073307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.073324 | orchestrator | 2025-03-23 00:05:40.073345 | orchestrator | TASK [ovn-controller 
: Check ovn-controller containers] ************************ 2025-03-23 00:05:40.073363 | orchestrator | Sunday 23 March 2025 00:02:54 +0000 (0:00:02.643) 0:00:18.036 ********** 2025-03-23 00:05:40.073384 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.073401 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.073417 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.073434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-03-23 00:05:40.073450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.073467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.073483 | orchestrator | 2025-03-23 00:05:40.073499 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-03-23 00:05:40.073692 | orchestrator | Sunday 23 March 2025 00:02:57 +0000 (0:00:02.666) 0:00:20.702 ********** 2025-03-23 00:05:40.073717 | orchestrator | changed: [testbed-node-4] 2025-03-23 00:05:40.073732 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:05:40.073747 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:05:40.073761 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:05:40.073775 | orchestrator | changed: [testbed-node-5] 2025-03-23 00:05:40.073788 | orchestrator | changed: [testbed-node-3] 2025-03-23 00:05:40.073802 | orchestrator | 2025-03-23 00:05:40.073816 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-03-23 00:05:40.073830 | orchestrator | Sunday 23 March 2025 00:03:00 +0000 (0:00:03.857) 0:00:24.560 ********** 2025-03-23 00:05:40.073844 | orchestrator | changed: 
[testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-03-23 00:05:40.073860 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-03-23 00:05:40.073874 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-03-23 00:05:40.073888 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-03-23 00:05:40.073902 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-03-23 00:05:40.073916 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-03-23 00:05:40.073930 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-03-23 00:05:40.073944 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-03-23 00:05:40.073965 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-03-23 00:05:40.073980 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-03-23 00:05:40.073994 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-03-23 00:05:40.074008 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-03-23 00:05:40.074073 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-03-23 00:05:40.074093 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-03-23 00:05:40.074108 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-03-23 00:05:40.074122 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-03-23 00:05:40.074136 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-03-23 00:05:40.074150 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-03-23 00:05:40.074164 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-03-23 00:05:40.074179 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-03-23 00:05:40.074194 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-03-23 00:05:40.074208 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-03-23 00:05:40.074221 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-03-23 00:05:40.074236 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-03-23 00:05:40.074259 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-03-23 00:05:40.074273 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-03-23 00:05:40.074287 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-03-23 00:05:40.074301 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-03-23 
00:05:40.074315 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-03-23 00:05:40.074331 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-03-23 00:05:40.074346 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-03-23 00:05:40.074364 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-03-23 00:05:40.074380 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-03-23 00:05:40.074396 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-03-23 00:05:40.074412 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-03-23 00:05:40.074429 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-03-23 00:05:40.074446 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-03-23 00:05:40.074462 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-03-23 00:05:40.074478 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-03-23 00:05:40.074495 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-03-23 00:05:40.074511 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-03-23 00:05:40.074527 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-03-23 00:05:40.074544 | orchestrator | changed: 
[testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-03-23 00:05:40.074561 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-03-23 00:05:40.074585 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-03-23 00:05:40.074602 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-03-23 00:05:40.074618 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-03-23 00:05:40.074634 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-03-23 00:05:40.074733 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-03-23 00:05:40.074750 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-03-23 00:05:40.074765 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-03-23 00:05:40.074779 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-03-23 00:05:40.074801 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-03-23 00:05:40.074815 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-03-23 00:05:40.074829 | orchestrator | 2025-03-23 
00:05:40.074843 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-03-23 00:05:40.074857 | orchestrator | Sunday 23 March 2025 00:03:21 +0000 (0:00:20.964) 0:00:45.524 **********
2025-03-23 00:05:40.074871 | orchestrator |
2025-03-23 00:05:40.074885 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-03-23 00:05:40.074899 | orchestrator | Sunday 23 March 2025 00:03:21 +0000 (0:00:00.059) 0:00:45.583 **********
2025-03-23 00:05:40.074913 | orchestrator |
2025-03-23 00:05:40.074927 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-03-23 00:05:40.074941 | orchestrator | Sunday 23 March 2025 00:03:22 +0000 (0:00:00.169) 0:00:45.752 **********
2025-03-23 00:05:40.074955 | orchestrator |
2025-03-23 00:05:40.074975 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-03-23 00:05:40.074990 | orchestrator | Sunday 23 March 2025 00:03:23 +0000 (0:00:00.886) 0:00:46.639 **********
2025-03-23 00:05:40.075004 | orchestrator |
2025-03-23 00:05:40.075018 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-03-23 00:05:40.075031 | orchestrator | Sunday 23 March 2025 00:03:23 +0000 (0:00:00.058) 0:00:46.697 **********
2025-03-23 00:05:40.075046 | orchestrator |
2025-03-23 00:05:40.075059 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-03-23 00:05:40.075073 | orchestrator | Sunday 23 March 2025 00:03:23 +0000 (0:00:00.055) 0:00:46.752 **********
2025-03-23 00:05:40.075087 | orchestrator |
2025-03-23 00:05:40.075101 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-03-23 00:05:40.075115 | orchestrator | Sunday 23 March 2025 00:03:23 +0000 (0:00:00.083) 0:00:46.836 **********
2025-03-23 00:05:40.075129 | orchestrator | ok: [testbed-node-4]
2025-03-23 00:05:40.075143 | orchestrator | ok: [testbed-node-3]
2025-03-23 00:05:40.075156 | orchestrator | ok: [testbed-node-5]
2025-03-23 00:05:40.075171 | orchestrator | ok: [testbed-node-1]
2025-03-23 00:05:40.075185 | orchestrator | ok: [testbed-node-2]
2025-03-23 00:05:40.075198 | orchestrator | ok: [testbed-node-0]
2025-03-23 00:05:40.075212 | orchestrator |
2025-03-23 00:05:40.075227 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-03-23 00:05:40.075240 | orchestrator | Sunday 23 March 2025 00:03:26 +0000 (0:00:02.898) 0:00:49.734 **********
2025-03-23 00:05:40.075253 | orchestrator | changed: [testbed-node-0]
2025-03-23 00:05:40.075265 | orchestrator | changed: [testbed-node-4]
2025-03-23 00:05:40.075287 | orchestrator | changed: [testbed-node-5]
2025-03-23 00:05:40.075300 | orchestrator | changed: [testbed-node-3]
2025-03-23 00:05:40.075312 | orchestrator | changed: [testbed-node-2]
2025-03-23 00:05:40.075324 | orchestrator | changed: [testbed-node-1]
2025-03-23 00:05:40.075337 | orchestrator |
2025-03-23 00:05:40.075349 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-03-23 00:05:40.075361 | orchestrator |
2025-03-23 00:05:40.075374 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-03-23 00:05:40.075386 | orchestrator | Sunday 23 March 2025 00:03:52 +0000 (0:00:26.192) 0:01:15.926 **********
2025-03-23 00:05:40.075399 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-23 00:05:40.075412 | orchestrator |
2025-03-23 00:05:40.075424 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-03-23 00:05:40.075437 | orchestrator | Sunday 23 March 2025 00:03:53 +0000 (0:00:01.303) 0:01:17.230 **********
2025-03-23 00:05:40.075449 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-23 00:05:40.075462 | orchestrator |
2025-03-23 00:05:40.075474 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-03-23 00:05:40.075493 | orchestrator | Sunday 23 March 2025 00:03:55 +0000 (0:00:01.658) 0:01:18.888 **********
2025-03-23 00:05:40.075505 | orchestrator | ok: [testbed-node-1]
2025-03-23 00:05:40.075518 | orchestrator | ok: [testbed-node-0]
2025-03-23 00:05:40.075531 | orchestrator | ok: [testbed-node-2]
2025-03-23 00:05:40.075543 | orchestrator |
2025-03-23 00:05:40.075555 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-03-23 00:05:40.075567 | orchestrator | Sunday 23 March 2025 00:03:56 +0000 (0:00:01.177) 0:01:20.066 **********
2025-03-23 00:05:40.075580 | orchestrator | ok: [testbed-node-0]
2025-03-23 00:05:40.075592 | orchestrator | ok: [testbed-node-1]
2025-03-23 00:05:40.075605 | orchestrator | ok: [testbed-node-2]
2025-03-23 00:05:40.075623 | orchestrator |
2025-03-23 00:05:40.075636 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-03-23 00:05:40.075664 | orchestrator | Sunday 23 March 2025 00:03:58 +0000 (0:00:01.641) 0:01:21.707 **********
2025-03-23 00:05:40.075677 | orchestrator | ok: [testbed-node-0]
2025-03-23 00:05:40.075689 | orchestrator | ok: [testbed-node-1]
2025-03-23 00:05:40.075702 | orchestrator | ok: [testbed-node-2]
2025-03-23 00:05:40.075714 | orchestrator |
2025-03-23 00:05:40.075727 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-03-23 00:05:40.075740 | orchestrator | Sunday 23 March 2025 00:03:59 +0000 (0:00:01.004) 0:01:22.712 **********
2025-03-23 00:05:40.075752 | orchestrator | ok: [testbed-node-0]
2025-03-23 00:05:40.075764 | orchestrator | ok: [testbed-node-1]
2025-03-23 00:05:40.075777 | orchestrator | ok: [testbed-node-2]
2025-03-23 00:05:40.075789 | orchestrator |
2025-03-23 00:05:40.075802 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-03-23 00:05:40.075814 | orchestrator | Sunday 23 March 2025 00:04:00 +0000 (0:00:01.298) 0:01:24.010 **********
2025-03-23 00:05:40.075827 | orchestrator | ok: [testbed-node-0]
2025-03-23 00:05:40.075839 | orchestrator | ok: [testbed-node-1]
2025-03-23 00:05:40.075851 | orchestrator | ok: [testbed-node-2]
2025-03-23 00:05:40.075864 | orchestrator |
2025-03-23 00:05:40.075881 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-03-23 00:05:40.075894 | orchestrator | Sunday 23 March 2025 00:04:00 +0000 (0:00:00.567) 0:01:24.577 **********
2025-03-23 00:05:40.075907 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:05:40.075919 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:05:40.075932 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:05:40.075944 | orchestrator |
2025-03-23 00:05:40.075956 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-03-23 00:05:40.075969 | orchestrator | Sunday 23 March 2025 00:04:01 +0000 (0:00:00.533) 0:01:25.111 **********
2025-03-23 00:05:40.075981 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:05:40.075994 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:05:40.076006 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:05:40.076019 | orchestrator |
2025-03-23 00:05:40.076032 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2025-03-23 00:05:40.076045 | orchestrator | Sunday 23 March 2025 00:04:01 +0000 (0:00:00.479) 0:01:25.591 **********
2025-03-23 00:05:40.076057 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:05:40.076070 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:05:40.076082 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:05:40.076095 | orchestrator |
2025-03-23 00:05:40.076107 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2025-03-23 00:05:40.076124 | orchestrator | Sunday 23 March 2025 00:04:02 +0000 (0:00:00.568) 0:01:26.160 **********
2025-03-23 00:05:40.076137 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:05:40.076149 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:05:40.076162 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:05:40.076174 | orchestrator |
2025-03-23 00:05:40.076187 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2025-03-23 00:05:40.076200 | orchestrator | Sunday 23 March 2025 00:04:03 +0000 (0:00:00.711) 0:01:26.871 **********
2025-03-23 00:05:40.076218 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:05:40.076231 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:05:40.076243 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:05:40.076256 | orchestrator |
2025-03-23 00:05:40.076268 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2025-03-23 00:05:40.076281 | orchestrator | Sunday 23 March 2025 00:04:03 +0000 (0:00:00.564) 0:01:27.435 **********
2025-03-23 00:05:40.076293 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:05:40.076306 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:05:40.076318 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:05:40.076331 | orchestrator |
2025-03-23 00:05:40.076343 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2025-03-23 00:05:40.076356 | orchestrator | Sunday 23 March 2025 00:04:04 +0000 (0:00:00.545) 0:01:27.981 **********
2025-03-23 00:05:40.076368 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:05:40.076381 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:05:40.076393 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:05:40.076405 | orchestrator |
2025-03-23 00:05:40.076418 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2025-03-23 00:05:40.076430 | orchestrator | Sunday 23 March 2025 00:04:04 +0000 (0:00:00.348) 0:01:28.330 **********
2025-03-23 00:05:40.076443 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:05:40.076455 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:05:40.076468 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:05:40.076480 | orchestrator |
2025-03-23 00:05:40.076493 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2025-03-23 00:05:40.076505 | orchestrator | Sunday 23 March 2025 00:04:05 +0000 (0:00:00.604) 0:01:28.934 **********
2025-03-23 00:05:40.076518 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:05:40.076530 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:05:40.076542 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:05:40.076555 | orchestrator |
2025-03-23 00:05:40.076567 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2025-03-23 00:05:40.076580 | orchestrator | Sunday 23 March 2025 00:04:05 +0000 (0:00:00.463) 0:01:29.398 **********
2025-03-23 00:05:40.076592 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:05:40.076604 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:05:40.076617 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:05:40.076629 | orchestrator |
2025-03-23 00:05:40.076656 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2025-03-23 00:05:40.076669 | orchestrator | Sunday 23 March 2025 00:04:06 +0000 (0:00:00.346) 0:01:29.744 **********
2025-03-23 00:05:40.076681 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:05:40.076694 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:05:40.076707 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:05:40.076726 | orchestrator |
2025-03-23 00:05:40.076739 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2025-03-23 00:05:40.076752 | orchestrator | Sunday 23 March 2025 00:04:06 +0000 (0:00:00.607) 0:01:30.352 **********
2025-03-23 00:05:40.076765 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:05:40.076779 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:05:40.076797 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:05:40.076811 | orchestrator |
2025-03-23 00:05:40.076823 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-03-23 00:05:40.076836 | orchestrator | Sunday 23 March 2025 00:04:07 +0000 (0:00:00.642) 0:01:30.995 **********
2025-03-23 00:05:40.076852 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-23 00:05:40.076865 | orchestrator |
2025-03-23 00:05:40.076878 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2025-03-23 00:05:40.076890 | orchestrator | Sunday 23 March 2025 00:04:09 +0000 (0:00:01.647) 0:01:32.642 **********
2025-03-23 00:05:40.076903 | orchestrator | ok: [testbed-node-0]
2025-03-23 00:05:40.076921 | orchestrator | ok: [testbed-node-1]
2025-03-23 00:05:40.076934 | orchestrator | ok: [testbed-node-2]
2025-03-23 00:05:40.076947 | orchestrator |
2025-03-23 00:05:40.076959 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2025-03-23 00:05:40.076971 | orchestrator | Sunday 23 March 2025 00:04:10 +0000 (0:00:01.150) 0:01:33.793 **********
2025-03-23 00:05:40.076984 | orchestrator | ok: [testbed-node-0]
2025-03-23 00:05:40.076996 | orchestrator | ok: [testbed-node-1]
2025-03-23 00:05:40.077009 | orchestrator | ok: [testbed-node-2]
2025-03-23 00:05:40.077021 | orchestrator |
2025-03-23 00:05:40.077034 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2025-03-23 00:05:40.077046 | orchestrator | Sunday 23 March 2025 00:04:11 +0000 (0:00:01.455) 0:01:35.249 **********
2025-03-23 00:05:40.077058 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:05:40.077071 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:05:40.077083 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:05:40.077096 | orchestrator |
2025-03-23 00:05:40.077108 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2025-03-23 00:05:40.077121 | orchestrator | Sunday 23 March 2025 00:04:12 +0000 (0:00:00.693) 0:01:35.942 **********
2025-03-23 00:05:40.077133 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:05:40.077145 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:05:40.077158 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:05:40.077170 | orchestrator |
2025-03-23 00:05:40.077183 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2025-03-23 00:05:40.077195 | orchestrator | Sunday 23 March 2025 00:04:13 +0000 (0:00:01.334) 0:01:37.276 **********
2025-03-23 00:05:40.077208 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:05:40.077220 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:05:40.077232 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:05:40.077245 | orchestrator |
2025-03-23 00:05:40.077257 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2025-03-23 00:05:40.077270 | orchestrator | Sunday 23 March 2025 00:04:14 +0000 (0:00:00.602) 0:01:37.879 **********
2025-03-23 00:05:40.077282 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:05:40.077295 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:05:40.077307 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:05:40.077319 | orchestrator |
2025-03-23 00:05:40.077336 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2025-03-23 00:05:40.077349 | orchestrator | Sunday 23 March 2025 00:04:14 +0000 (0:00:00.554) 0:01:38.433 **********
2025-03-23 00:05:40.077362 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:05:40.077374 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:05:40.077386 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:05:40.077399 | orchestrator |
2025-03-23 00:05:40.077412 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2025-03-23 00:05:40.077424 | orchestrator | Sunday 23 March 2025 00:04:15 +0000 (0:00:00.361) 0:01:38.795 **********
2025-03-23 00:05:40.077437 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:05:40.077449 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:05:40.077461 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:05:40.077474 | orchestrator |
2025-03-23 00:05:40.077486 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-03-23 00:05:40.077499 | orchestrator | Sunday 23 March 2025 00:04:15 +0000 (0:00:00.655) 0:01:39.450 **********
2025-03-23 00:05:40.077512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.077526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.077545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.077574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.077588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.077601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.077613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.077626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.077678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.077693 | orchestrator |
2025-03-23 00:05:40.077706 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-03-23 00:05:40.077718 | orchestrator | Sunday 23 March 2025 00:04:18 +0000 (0:00:02.662) 0:01:42.113 **********
2025-03-23 00:05:40.077731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.077743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.077762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.077774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.077790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.077801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.077811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.077825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.077835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.077846 | orchestrator |
2025-03-23 00:05:40.077856 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-03-23 00:05:40.077866 | orchestrator | Sunday 23 March 2025 00:04:24 +0000 (0:00:05.940) 0:01:48.053 **********
2025-03-23 00:05:40.077877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.077887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.077902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.077913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.077923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.077939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.077950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.077961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.077971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.077981 | orchestrator |
2025-03-23 00:05:40.077991 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-03-23 00:05:40.078002 | orchestrator | Sunday 23 March 2025 00:04:27 +0000 (0:00:03.336) 0:01:51.389 **********
2025-03-23 00:05:40.078012 | orchestrator |
2025-03-23 00:05:40.078050 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-03-23 00:05:40.078060 | orchestrator | Sunday 23 March 2025 00:04:27 +0000 (0:00:00.082) 0:01:51.472 **********
2025-03-23 00:05:40.078071 | orchestrator |
2025-03-23 00:05:40.078081 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-03-23 00:05:40.078091 | orchestrator | Sunday 23 March 2025 00:04:28 +0000 (0:00:00.237) 0:01:51.709 **********
2025-03-23 00:05:40.078101 | orchestrator |
2025-03-23 00:05:40.078112 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-03-23 00:05:40.078122 | orchestrator | Sunday 23 March 2025 00:04:28 +0000 (0:00:00.082) 0:01:51.792 **********
2025-03-23 00:05:40.078141 | orchestrator | changed: [testbed-node-0]
2025-03-23 00:05:40.078151 | orchestrator | changed: [testbed-node-1]
2025-03-23 00:05:40.078161 | orchestrator | changed: [testbed-node-2]
2025-03-23 00:05:40.078171 | orchestrator |
2025-03-23 00:05:40.078182 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-03-23 00:05:40.078192 | orchestrator | Sunday 23 March 2025 00:04:31 +0000 (0:00:03.529) 0:01:55.321 **********
2025-03-23 00:05:40.078202 | orchestrator | changed: [testbed-node-1]
2025-03-23 00:05:40.078212 | orchestrator | changed: [testbed-node-2]
2025-03-23 00:05:40.078223 | orchestrator | changed: [testbed-node-0]
2025-03-23 00:05:40.078233 | orchestrator |
2025-03-23 00:05:40.078243 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-03-23 00:05:40.078253 | orchestrator | Sunday 23 March 2025 00:04:39 +0000 (0:00:08.041) 0:02:03.362 **********
2025-03-23 00:05:40.078263 | orchestrator | changed: [testbed-node-1]
2025-03-23 00:05:40.078274 | orchestrator | changed: [testbed-node-2]
2025-03-23 00:05:40.078284 | orchestrator | changed: [testbed-node-0]
2025-03-23 00:05:40.078294 | orchestrator |
2025-03-23 00:05:40.078304 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-03-23 00:05:40.078315 | orchestrator | Sunday 23 March 2025 00:04:47 +0000 (0:00:07.963) 0:02:11.325 **********
2025-03-23 00:05:40.078325 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:05:40.078335 | orchestrator |
2025-03-23 00:05:40.078345 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-03-23 00:05:40.078355 | orchestrator | Sunday 23 March 2025 00:04:47 +0000 (0:00:00.128) 0:02:11.454 **********
2025-03-23 00:05:40.078365 | orchestrator | ok: [testbed-node-1]
2025-03-23 00:05:40.078375 | orchestrator | ok: [testbed-node-2]
2025-03-23 00:05:40.078385 | orchestrator | ok: [testbed-node-0]
2025-03-23 00:05:40.078395 | orchestrator |
2025-03-23 00:05:40.078406 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-03-23 00:05:40.078416 | orchestrator | Sunday 23 March 2025 00:04:49 +0000 (0:00:01.270) 0:02:12.724 **********
2025-03-23 00:05:40.078426 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:05:40.078436 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:05:40.078447 | orchestrator | changed: [testbed-node-0]
2025-03-23 00:05:40.078457 | orchestrator |
2025-03-23 00:05:40.078467 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-03-23 00:05:40.078477 | orchestrator | Sunday 23 March 2025 00:04:49 +0000 (0:00:00.802) 0:02:13.526 **********
2025-03-23 00:05:40.078487 | orchestrator | ok: [testbed-node-0]
2025-03-23 00:05:40.078497 | orchestrator | ok: [testbed-node-1]
2025-03-23 00:05:40.078507 | orchestrator | ok: [testbed-node-2]
2025-03-23 00:05:40.078518 | orchestrator |
2025-03-23 00:05:40.078528 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-03-23 00:05:40.078538 | orchestrator | Sunday 23 March 2025 00:04:50 +0000 (0:00:00.775) 0:02:14.302 **********
2025-03-23 00:05:40.078548 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:05:40.078558 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:05:40.078568 | orchestrator | changed: [testbed-node-0]
2025-03-23 00:05:40.078578 | orchestrator |
2025-03-23 00:05:40.078595 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-03-23 00:05:40.078606 | orchestrator | Sunday 23 March 2025 00:04:51 +0000 (0:00:00.718) 0:02:15.020 **********
2025-03-23 00:05:40.078616 | orchestrator | ok: [testbed-node-0]
2025-03-23 00:05:40.078630 | orchestrator | ok: [testbed-node-1]
2025-03-23 00:05:40.078661 | orchestrator | ok: [testbed-node-2]
2025-03-23 00:05:40.078672 | orchestrator |
2025-03-23 00:05:40.078683 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-03-23 00:05:40.078693 | orchestrator | Sunday 23 March 2025 00:04:52 +0000 (0:00:01.176) 0:02:16.196 **********
2025-03-23 00:05:40.078703 | orchestrator | ok: [testbed-node-0]
2025-03-23 00:05:40.078713 | orchestrator | ok: [testbed-node-1]
2025-03-23 00:05:40.078723 | orchestrator | ok: [testbed-node-2]
2025-03-23 00:05:40.078739 | orchestrator |
2025-03-23 00:05:40.078750 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2025-03-23 00:05:40.078760 | orchestrator | Sunday 23 March 2025 00:04:53 +0000 (0:00:01.084) 0:02:17.281 **********
2025-03-23 00:05:40.078770 | orchestrator | ok: [testbed-node-0]
2025-03-23 00:05:40.078780 | orchestrator | ok: [testbed-node-1]
2025-03-23 00:05:40.078790 | orchestrator | ok: [testbed-node-2]
2025-03-23 00:05:40.078800 | orchestrator |
2025-03-23 00:05:40.078811 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-03-23 00:05:40.078821 | orchestrator | Sunday 23 March 2025 00:04:53 +0000 (0:00:00.298) 0:02:17.580 **********
2025-03-23 00:05:40.078831 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.078846 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.078857 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.078868 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.078882 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.078893 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.078903 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.078914 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.078928 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:05:40.078945 | orchestrator |
2025-03-23 00:05:40.078955 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-03-23 00:05:40.078966 | orchestrator | Sunday 23 March 2025 00:04:55 +0000 (0:00:01.752) 0:02:19.332 **********
2025-03-23 00:05:40.078976 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd',
'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.078987 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.078997 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.079007 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.079018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.079028 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.079038 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.079052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.079063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.079079 | orchestrator | 2025-03-23 00:05:40.079089 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-03-23 00:05:40.079100 | orchestrator | Sunday 23 March 2025 00:05:01 +0000 (0:00:05.503) 0:02:24.836 ********** 
2025-03-23 00:05:40.079114 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.079125 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.079136 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.079146 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.079156 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.079166 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.079177 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.079188 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.079198 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-23 00:05:40.079214 | orchestrator | 2025-03-23 00:05:40.079225 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-03-23 00:05:40.079235 | 
orchestrator | Sunday 23 March 2025 00:05:07 +0000 (0:00:06.541) 0:02:31.378 ********** 2025-03-23 00:05:40.079245 | orchestrator | 2025-03-23 00:05:40.079255 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-03-23 00:05:40.079265 | orchestrator | Sunday 23 March 2025 00:05:07 +0000 (0:00:00.232) 0:02:31.610 ********** 2025-03-23 00:05:40.079275 | orchestrator | 2025-03-23 00:05:40.079285 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-03-23 00:05:40.079295 | orchestrator | Sunday 23 March 2025 00:05:08 +0000 (0:00:00.061) 0:02:31.671 ********** 2025-03-23 00:05:40.079306 | orchestrator | 2025-03-23 00:05:40.079316 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-03-23 00:05:40.079326 | orchestrator | Sunday 23 March 2025 00:05:08 +0000 (0:00:00.080) 0:02:31.751 ********** 2025-03-23 00:05:40.079336 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:05:40.079346 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:05:40.079356 | orchestrator | 2025-03-23 00:05:40.079371 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-03-23 00:05:40.079382 | orchestrator | Sunday 23 March 2025 00:05:15 +0000 (0:00:07.422) 0:02:39.174 ********** 2025-03-23 00:05:40.079392 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:05:40.079402 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:05:40.079412 | orchestrator | 2025-03-23 00:05:40.079423 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-03-23 00:05:40.079433 | orchestrator | Sunday 23 March 2025 00:05:22 +0000 (0:00:07.315) 0:02:46.489 ********** 2025-03-23 00:05:40.079443 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:05:40.079453 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:05:40.079463 | orchestrator | 2025-03-23 
00:05:40.079473 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-03-23 00:05:40.079483 | orchestrator | Sunday 23 March 2025 00:05:29 +0000 (0:00:07.052) 0:02:53.541 ********** 2025-03-23 00:05:40.079493 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:05:40.079503 | orchestrator | 2025-03-23 00:05:40.079514 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-03-23 00:05:40.079524 | orchestrator | Sunday 23 March 2025 00:05:30 +0000 (0:00:00.157) 0:02:53.698 ********** 2025-03-23 00:05:40.079534 | orchestrator | ok: [testbed-node-0] 2025-03-23 00:05:40.079544 | orchestrator | ok: [testbed-node-1] 2025-03-23 00:05:40.079555 | orchestrator | ok: [testbed-node-2] 2025-03-23 00:05:40.079565 | orchestrator | 2025-03-23 00:05:40.079575 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-03-23 00:05:40.079585 | orchestrator | Sunday 23 March 2025 00:05:31 +0000 (0:00:01.151) 0:02:54.850 ********** 2025-03-23 00:05:40.079595 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:05:40.079606 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:05:40.079616 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:05:40.079626 | orchestrator | 2025-03-23 00:05:40.079636 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-03-23 00:05:40.079664 | orchestrator | Sunday 23 March 2025 00:05:32 +0000 (0:00:00.845) 0:02:55.696 ********** 2025-03-23 00:05:40.079675 | orchestrator | ok: [testbed-node-0] 2025-03-23 00:05:40.079685 | orchestrator | ok: [testbed-node-1] 2025-03-23 00:05:40.079695 | orchestrator | ok: [testbed-node-2] 2025-03-23 00:05:40.079705 | orchestrator | 2025-03-23 00:05:40.079720 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-03-23 00:05:40.079730 | orchestrator | Sunday 23 March 2025 
00:05:33 +0000 (0:00:01.629) 0:02:57.325 ********** 2025-03-23 00:05:40.079740 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:05:40.079751 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:05:40.079761 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:05:40.079777 | orchestrator | 2025-03-23 00:05:40.079787 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-03-23 00:05:40.079797 | orchestrator | Sunday 23 March 2025 00:05:34 +0000 (0:00:00.877) 0:02:58.203 ********** 2025-03-23 00:05:40.079807 | orchestrator | ok: [testbed-node-0] 2025-03-23 00:05:40.079818 | orchestrator | ok: [testbed-node-1] 2025-03-23 00:05:40.079828 | orchestrator | ok: [testbed-node-2] 2025-03-23 00:05:40.079838 | orchestrator | 2025-03-23 00:05:40.079848 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-03-23 00:05:40.079859 | orchestrator | Sunday 23 March 2025 00:05:35 +0000 (0:00:00.833) 0:02:59.036 ********** 2025-03-23 00:05:40.079869 | orchestrator | ok: [testbed-node-0] 2025-03-23 00:05:40.079879 | orchestrator | ok: [testbed-node-1] 2025-03-23 00:05:40.079889 | orchestrator | ok: [testbed-node-2] 2025-03-23 00:05:40.079899 | orchestrator | 2025-03-23 00:05:40.079909 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-23 00:05:40.079920 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-03-23 00:05:40.079930 | orchestrator | testbed-node-1 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-03-23 00:05:40.079941 | orchestrator | testbed-node-2 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-03-23 00:05:40.079951 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 00:05:40.079962 | orchestrator | testbed-node-4 : ok=12  
changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 00:05:40.079972 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-23 00:05:40.079987 | orchestrator | 2025-03-23 00:05:40.079997 | orchestrator | 2025-03-23 00:05:40.080007 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-23 00:05:40.080018 | orchestrator | Sunday 23 March 2025 00:05:36 +0000 (0:00:01.475) 0:03:00.512 ********** 2025-03-23 00:05:40.080028 | orchestrator | =============================================================================== 2025-03-23 00:05:40.080038 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 26.19s 2025-03-23 00:05:40.080048 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.96s 2025-03-23 00:05:40.080058 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 15.36s 2025-03-23 00:05:40.080069 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 15.02s 2025-03-23 00:05:40.080079 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 10.95s 2025-03-23 00:05:40.080089 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 6.54s 2025-03-23 00:05:40.080099 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.94s 2025-03-23 00:05:40.080114 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.50s 2025-03-23 00:05:43.119726 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 4.52s 2025-03-23 00:05:43.119845 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.86s 2025-03-23 00:05:43.119865 | orchestrator | ovn-db : Check ovn containers 
------------------------------------------- 3.34s 2025-03-23 00:05:43.119880 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 3.05s 2025-03-23 00:05:43.119895 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.90s 2025-03-23 00:05:43.119910 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 2.67s 2025-03-23 00:05:43.119924 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 2.66s 2025-03-23 00:05:43.119964 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.64s 2025-03-23 00:05:43.119979 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 2.39s 2025-03-23 00:05:43.119993 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.13s 2025-03-23 00:05:43.120007 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.75s 2025-03-23 00:05:43.120021 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 1.66s 2025-03-23 00:05:43.120036 | orchestrator | 2025-03-23 00:05:40 | INFO  | Task 81fb58fb-c6ed-4ad1-a111-40e38ebb7882 is in state SUCCESS 2025-03-23 00:05:43.120052 | orchestrator | 2025-03-23 00:05:40 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:05:43.120066 | orchestrator | 2025-03-23 00:05:40 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:05:43.120080 | orchestrator | 2025-03-23 00:05:40 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:05:43.120111 | orchestrator | 2025-03-23 00:05:43 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:05:43.121618 | orchestrator | 2025-03-23 00:05:43 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:05:43.122252 | orchestrator 
| 2025-03-23 00:05:43 | INFO  | Wait 1 second(s) until the next check
| Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:08:09.565124 | orchestrator | 2025-03-23 00:08:06 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:08:09.565269 | orchestrator | 2025-03-23 00:08:09 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:08:12.611493 | orchestrator | 2025-03-23 00:08:09 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:08:12.611629 | orchestrator | 2025-03-23 00:08:09 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:08:12.611666 | orchestrator | 2025-03-23 00:08:12 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:08:12.611864 | orchestrator | 2025-03-23 00:08:12 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:08:12.612850 | orchestrator | 2025-03-23 00:08:12 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:08:15.652289 | orchestrator | 2025-03-23 00:08:15 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:08:15.655227 | orchestrator | 2025-03-23 00:08:15 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:08:18.710225 | orchestrator | 2025-03-23 00:08:15 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:08:18.710299 | orchestrator | 2025-03-23 00:08:18 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:08:18.711221 | orchestrator | 2025-03-23 00:08:18 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:08:21.756231 | orchestrator | 2025-03-23 00:08:18 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:08:21.756362 | orchestrator | 2025-03-23 00:08:21 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:08:21.759135 | orchestrator | 2025-03-23 00:08:21 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 
00:08:24.809667 | orchestrator | 2025-03-23 00:08:21 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:08:24.809784 | orchestrator | 2025-03-23 00:08:24 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:08:24.810689 | orchestrator | 2025-03-23 00:08:24 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:08:27.853370 | orchestrator | 2025-03-23 00:08:24 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:08:27.853498 | orchestrator | 2025-03-23 00:08:27 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:08:27.853886 | orchestrator | 2025-03-23 00:08:27 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:08:30.913957 | orchestrator | 2025-03-23 00:08:27 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:08:30.914144 | orchestrator | 2025-03-23 00:08:30 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:08:30.916701 | orchestrator | 2025-03-23 00:08:30 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:08:30.917504 | orchestrator | 2025-03-23 00:08:30 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:08:33.963588 | orchestrator | 2025-03-23 00:08:33 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:08:33.965149 | orchestrator | 2025-03-23 00:08:33 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:08:33.966172 | orchestrator | 2025-03-23 00:08:33 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:08:37.026455 | orchestrator | 2025-03-23 00:08:37 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:08:37.027943 | orchestrator | 2025-03-23 00:08:37 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:08:40.072836 | orchestrator | 2025-03-23 00:08:37 | INFO  | Wait 1 second(s) 
until the next check 2025-03-23 00:08:40.072955 | orchestrator | 2025-03-23 00:08:40 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:08:40.075034 | orchestrator | 2025-03-23 00:08:40 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:08:43.115363 | orchestrator | 2025-03-23 00:08:40 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:08:43.115490 | orchestrator | 2025-03-23 00:08:43 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:08:43.116081 | orchestrator | 2025-03-23 00:08:43 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:08:46.166896 | orchestrator | 2025-03-23 00:08:43 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:08:46.167024 | orchestrator | 2025-03-23 00:08:46 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:08:46.171080 | orchestrator | 2025-03-23 00:08:46 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:08:49.216422 | orchestrator | 2025-03-23 00:08:46 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:08:49.216545 | orchestrator | 2025-03-23 00:08:49 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:08:49.217715 | orchestrator | 2025-03-23 00:08:49 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:08:52.269011 | orchestrator | 2025-03-23 00:08:49 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:08:52.269200 | orchestrator | 2025-03-23 00:08:52 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:08:55.310864 | orchestrator | 2025-03-23 00:08:52 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:08:55.310959 | orchestrator | 2025-03-23 00:08:52 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:08:55.310993 | orchestrator | 2025-03-23 
00:08:55 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:08:55.312676 | orchestrator | 2025-03-23 00:08:55 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:08:58.361441 | orchestrator | 2025-03-23 00:08:55 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:08:58.361538 | orchestrator | 2025-03-23 00:08:58 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:08:58.363695 | orchestrator | 2025-03-23 00:08:58 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:08:58.363923 | orchestrator | 2025-03-23 00:08:58 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:09:01.412631 | orchestrator | 2025-03-23 00:09:01 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:09:01.415865 | orchestrator | 2025-03-23 00:09:01 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:09:04.491122 | orchestrator | 2025-03-23 00:09:01 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:09:04.491214 | orchestrator | 2025-03-23 00:09:04 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:09:04.491763 | orchestrator | 2025-03-23 00:09:04 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:09:07.547158 | orchestrator | 2025-03-23 00:09:04 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:09:07.547288 | orchestrator | 2025-03-23 00:09:07 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:09:07.548730 | orchestrator | 2025-03-23 00:09:07 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:09:10.610072 | orchestrator | 2025-03-23 00:09:07 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:09:10.610211 | orchestrator | 2025-03-23 00:09:10 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state 
STARTED 2025-03-23 00:09:10.611577 | orchestrator | 2025-03-23 00:09:10 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:09:13.652686 | orchestrator | 2025-03-23 00:09:10 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:09:13.652822 | orchestrator | 2025-03-23 00:09:13 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:09:13.653202 | orchestrator | 2025-03-23 00:09:13 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:09:13.653232 | orchestrator | 2025-03-23 00:09:13 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:09:16.701383 | orchestrator | 2025-03-23 00:09:16 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:09:16.702923 | orchestrator | 2025-03-23 00:09:16 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:09:19.763738 | orchestrator | 2025-03-23 00:09:16 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:09:19.763870 | orchestrator | 2025-03-23 00:09:19 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:09:19.765059 | orchestrator | 2025-03-23 00:09:19 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:09:19.765295 | orchestrator | 2025-03-23 00:09:19 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:09:22.826192 | orchestrator | 2025-03-23 00:09:22 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:09:22.827860 | orchestrator | 2025-03-23 00:09:22 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:09:22.828308 | orchestrator | 2025-03-23 00:09:22 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:09:25.881096 | orchestrator | 2025-03-23 00:09:25 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:09:25.882746 | orchestrator | 2025-03-23 00:09:25 | INFO  
| Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:09:25.882836 | orchestrator | 2025-03-23 00:09:25 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:09:28.946918 | orchestrator | 2025-03-23 00:09:28 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:09:31.990925 | orchestrator | 2025-03-23 00:09:28 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:09:31.991160 | orchestrator | 2025-03-23 00:09:28 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:09:31.991199 | orchestrator | 2025-03-23 00:09:31 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:09:31.994060 | orchestrator | 2025-03-23 00:09:31 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:09:31.994098 | orchestrator | 2025-03-23 00:09:31 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:09:35.039995 | orchestrator | 2025-03-23 00:09:35 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:09:35.042963 | orchestrator | 2025-03-23 00:09:35 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:09:38.095798 | orchestrator | 2025-03-23 00:09:35 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:09:38.095936 | orchestrator | 2025-03-23 00:09:38 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:09:38.097739 | orchestrator | 2025-03-23 00:09:38 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 00:09:41.153227 | orchestrator | 2025-03-23 00:09:38 | INFO  | Wait 1 second(s) until the next check 2025-03-23 00:09:41.153361 | orchestrator | 2025-03-23 00:09:41 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state STARTED 2025-03-23 00:09:41.153821 | orchestrator | 2025-03-23 00:09:41 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED 2025-03-23 
00:09:44.213257 | orchestrator | 2025-03-23 00:09:41 | INFO  | Wait 1 second(s) until the next check
2025-03-23 00:09:44.213377 | orchestrator | 2025-03-23 00:09:44 | INFO  | Task e20548fb-cbfb-47e5-aaeb-6b78d0faaa3d is in state STARTED
2025-03-23 00:09:44.221193 | orchestrator | 2025-03-23 00:09:44 | INFO  | Task 54b20615-e8c9-471b-8ee6-a6ef2aa81fb9 is in state SUCCESS
2025-03-23 00:09:44.221646 | orchestrator |
2025-03-23 00:09:44.224071 | orchestrator |
2025-03-23 00:09:44.224333 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-03-23 00:09:44.224351 | orchestrator |
2025-03-23 00:09:44.224366 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-03-23 00:09:44.224380 | orchestrator | Sunday 23 March 2025 00:00:48 +0000 (0:00:00.493) 0:00:00.493 **********
2025-03-23 00:09:44.224401 | orchestrator | ok: [testbed-node-0]
2025-03-23 00:09:44.224416 | orchestrator | ok: [testbed-node-1]
2025-03-23 00:09:44.224510 | orchestrator | ok: [testbed-node-2]
2025-03-23 00:09:44.224527 | orchestrator |
2025-03-23 00:09:44.224540 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-03-23 00:09:44.224553 | orchestrator | Sunday 23 March 2025 00:00:49 +0000 (0:00:00.518) 0:00:01.011 **********
2025-03-23 00:09:44.224567 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-03-23 00:09:44.224601 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-03-23 00:09:44.224615 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-03-23 00:09:44.224628 | orchestrator |
2025-03-23 00:09:44.224641 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-03-23 00:09:44.224654 | orchestrator |
2025-03-23 00:09:44.224877 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-03-23 00:09:44.224896 | orchestrator | Sunday 23 March 2025 00:00:49 +0000 (0:00:00.601) 0:00:01.613 **********
2025-03-23 00:09:44.224910 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-23 00:09:44.224923 | orchestrator |
2025-03-23 00:09:44.224936 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-03-23 00:09:44.224998 | orchestrator | Sunday 23 March 2025 00:00:51 +0000 (0:00:01.145) 0:00:02.758 **********
2025-03-23 00:09:44.225012 | orchestrator | ok: [testbed-node-0]
2025-03-23 00:09:44.225044 | orchestrator | ok: [testbed-node-1]
2025-03-23 00:09:44.225057 | orchestrator | ok: [testbed-node-2]
2025-03-23 00:09:44.225139 | orchestrator |
2025-03-23 00:09:44.225153 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-03-23 00:09:44.225166 | orchestrator | Sunday 23 March 2025 00:00:52 +0000 (0:00:01.345) 0:00:04.103 **********
2025-03-23 00:09:44.225178 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-23 00:09:44.225191 | orchestrator |
2025-03-23 00:09:44.225203 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-03-23 00:09:44.225216 | orchestrator | Sunday 23 March 2025 00:00:53 +0000 (0:00:01.410) 0:00:05.513 **********
2025-03-23 00:09:44.225228 | orchestrator | ok: [testbed-node-0]
2025-03-23 00:09:44.225241 | orchestrator | ok: [testbed-node-1]
2025-03-23 00:09:44.225253 | orchestrator | ok: [testbed-node-2]
2025-03-23 00:09:44.225266 | orchestrator |
2025-03-23 00:09:44.225279 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-03-23 00:09:44.225291 | orchestrator | Sunday 23 March 2025 00:00:55 +0000 (0:00:01.334) 0:00:06.847 **********
2025-03-23 00:09:44.225304 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-03-23 00:09:44.225316 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-03-23 00:09:44.225399 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-03-23 00:09:44.225422 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-03-23 00:09:44.225435 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-03-23 00:09:44.225448 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-03-23 00:09:44.225460 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-03-23 00:09:44.225474 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-03-23 00:09:44.225487 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-03-23 00:09:44.225500 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-03-23 00:09:44.225512 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-03-23 00:09:44.225527 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-03-23 00:09:44.225542 | orchestrator |
2025-03-23 00:09:44.225556 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-03-23 00:09:44.225570 | orchestrator | Sunday 23 March 2025 00:00:59 +0000 (0:00:04.759) 0:00:11.607 **********
2025-03-23 00:09:44.226722 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-03-23 00:09:44.226756 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-03-23 00:09:44.226767 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-03-23 00:09:44.226777 | orchestrator |
2025-03-23 00:09:44.226787 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-03-23 00:09:44.226798 | orchestrator | Sunday 23 March 2025 00:01:01 +0000 (0:00:01.178) 0:00:12.785 **********
2025-03-23 00:09:44.226808 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-03-23 00:09:44.226819 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-03-23 00:09:44.227350 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-03-23 00:09:44.227363 | orchestrator |
2025-03-23 00:09:44.227374 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-03-23 00:09:44.227385 | orchestrator | Sunday 23 March 2025 00:01:03 +0000 (0:00:02.252) 0:00:15.037 **********
2025-03-23 00:09:44.227396 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-03-23 00:09:44.227406 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.227509 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-03-23 00:09:44.227527 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.227538 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-03-23 00:09:44.227548 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.227558 | orchestrator |
2025-03-23 00:09:44.227569 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2025-03-23 00:09:44.227579 | orchestrator | Sunday 23 March 2025 00:01:04 +0000 (0:00:00.890) 0:00:15.928 **********
2025-03-23 00:09:44.227647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-03-23 00:09:44.227664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-03-23 00:09:44.227676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-03-23 00:09:44.227686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-03-23 00:09:44.227698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-03-23 00:09:44.227763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-03-23 00:09:44.227785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-03-23 00:09:44.227795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5baf8ccaf19abf975c04dda173dced2773340a8e', '__omit_place_holder__5baf8ccaf19abf975c04dda173dced2773340a8e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-03-23 00:09:44.227806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-03-23 00:09:44.227815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5baf8ccaf19abf975c04dda173dced2773340a8e', '__omit_place_holder__5baf8ccaf19abf975c04dda173dced2773340a8e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-03-23 00:09:44.227824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-03-23 00:09:44.227833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5baf8ccaf19abf975c04dda173dced2773340a8e', '__omit_place_holder__5baf8ccaf19abf975c04dda173dced2773340a8e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-03-23 00:09:44.227846 | orchestrator |
2025-03-23 00:09:44.228086 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2025-03-23 00:09:44.228106 | orchestrator | Sunday 23 March 2025 00:01:07 +0000 (0:00:03.476) 0:00:19.404 **********
2025-03-23 00:09:44.228115 | orchestrator | changed: [testbed-node-1]
2025-03-23 00:09:44.228125 | orchestrator | changed: [testbed-node-0]
2025-03-23 00:09:44.228133 | orchestrator | changed: [testbed-node-2]
2025-03-23 00:09:44.228142 | orchestrator |
2025-03-23 00:09:44.228151 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2025-03-23 00:09:44.228160 | orchestrator | Sunday 23 March 2025 00:01:09 +0000 (0:00:02.238) 0:00:21.643 **********
2025-03-23 00:09:44.228216 | orchestrator | changed: [testbed-node-0] => (item=users)
2025-03-23 00:09:44.228229 | orchestrator | changed: [testbed-node-2] => (item=users)
2025-03-23 00:09:44.228238 | orchestrator | changed: [testbed-node-1] => (item=users)
2025-03-23 00:09:44.228247 | orchestrator | changed: [testbed-node-0] => (item=rules)
2025-03-23 00:09:44.228255 | orchestrator | changed: [testbed-node-2] => (item=rules)
2025-03-23 00:09:44.228264 | orchestrator | changed: [testbed-node-1] => (item=rules)
2025-03-23 00:09:44.228273 | orchestrator |
2025-03-23 00:09:44.228282 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2025-03-23 00:09:44.228291 | orchestrator | Sunday 23 March 2025 00:01:16 +0000 (0:00:07.019) 0:00:28.662 **********
2025-03-23 00:09:44.228299 | orchestrator | changed: [testbed-node-0]
2025-03-23 00:09:44.228356 | orchestrator | changed: [testbed-node-1]
2025-03-23 00:09:44.228367 | orchestrator | changed: [testbed-node-2]
2025-03-23 00:09:44.228451 | orchestrator |
2025-03-23 00:09:44.228464 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2025-03-23 00:09:44.228473 | orchestrator | Sunday 23 March 2025 00:01:21 +0000 (0:00:05.101) 0:00:32.870 **********
2025-03-23 00:09:44.228482 | orchestrator | ok: [testbed-node-0]
2025-03-23 00:09:44.229123 | orchestrator | ok: [testbed-node-1]
2025-03-23 00:09:44.229154 | orchestrator | ok: [testbed-node-2]
2025-03-23 00:09:44.229164 | orchestrator |
2025-03-23 00:09:44.229173 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2025-03-23 00:09:44.229185 | orchestrator | Sunday 23 March 2025 00:01:26 +0000 (0:00:05.101) 0:00:37.972 **********
2025-03-23 00:09:44.229196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-03-23 00:09:44.229206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-03-23 00:09:44.229216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-03-23 00:09:44.229233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-03-23 00:09:44.229243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-03-23 00:09:44.229823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-03-23 00:09:44.229853 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-23 00:09:44.229863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5baf8ccaf19abf975c04dda173dced2773340a8e', '__omit_place_holder__5baf8ccaf19abf975c04dda173dced2773340a8e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-23 00:09:44.229872 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.229882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-23 
00:09:44.229901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-23 00:09:44.229910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5baf8ccaf19abf975c04dda173dced2773340a8e', '__omit_place_holder__5baf8ccaf19abf975c04dda173dced2773340a8e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-23 00:09:44.229919 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.231078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5baf8ccaf19abf975c04dda173dced2773340a8e', '__omit_place_holder__5baf8ccaf19abf975c04dda173dced2773340a8e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-23 00:09:44.231157 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.231168 | orchestrator | 2025-03-23 00:09:44.231177 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-03-23 00:09:44.231185 | orchestrator | Sunday 23 March 2025 00:01:29 +0000 (0:00:03.253) 0:00:41.225 ********** 2025-03-23 00:09:44.231194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-03-23 00:09:44.231203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-03-23 00:09:44.231212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-03-23 00:09:44.231229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-23 00:09:44.231238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-23 00:09:44.231253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-23 00:09:44.231262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-23 00:09:44.231271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5baf8ccaf19abf975c04dda173dced2773340a8e', '__omit_place_holder__5baf8ccaf19abf975c04dda173dced2773340a8e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-23 00:09:44.231280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-23 00:09:44.231293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-23 00:09:44.231301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5baf8ccaf19abf975c04dda173dced2773340a8e', '__omit_place_holder__5baf8ccaf19abf975c04dda173dced2773340a8e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-23 00:09:44.231310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__5baf8ccaf19abf975c04dda173dced2773340a8e', '__omit_place_holder__5baf8ccaf19abf975c04dda173dced2773340a8e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-23 00:09:44.231318 | orchestrator | 2025-03-23 00:09:44.231326 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-03-23 00:09:44.231334 | orchestrator | Sunday 23 March 2025 00:01:36 +0000 (0:00:06.834) 0:00:48.060 ********** 2025-03-23 00:09:44.231348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-03-23 00:09:44.231358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-03-23 00:09:44.231371 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-03-23 00:09:44.231384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-23 00:09:44.231392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-23 00:09:44.231401 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-23 00:09:44.231413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-23 00:09:44.231422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5baf8ccaf19abf975c04dda173dced2773340a8e', '__omit_place_holder__5baf8ccaf19abf975c04dda173dced2773340a8e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-23 00:09:44.231433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-23 00:09:44.231486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5baf8ccaf19abf975c04dda173dced2773340a8e', '__omit_place_holder__5baf8ccaf19abf975c04dda173dced2773340a8e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-23 00:09:44.231501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-23 00:09:44.231509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5baf8ccaf19abf975c04dda173dced2773340a8e', '__omit_place_holder__5baf8ccaf19abf975c04dda173dced2773340a8e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-23 00:09:44.231517 | orchestrator | 2025-03-23 00:09:44.231535 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-03-23 00:09:44.231564 | orchestrator | Sunday 23 March 2025 00:01:40 +0000 (0:00:04.064) 0:00:52.124 ********** 2025-03-23 00:09:44.231573 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-03-23 00:09:44.231599 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-03-23 00:09:44.231608 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-03-23 00:09:44.231616 | orchestrator | 2025-03-23 00:09:44.231624 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-03-23 00:09:44.231632 | orchestrator | Sunday 23 March 2025 00:01:43 +0000 (0:00:03.496) 0:00:55.621 ********** 2025-03-23 00:09:44.231663 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-03-23 00:09:44.231673 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-03-23 00:09:44.231686 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-03-23 00:09:44.231694 | orchestrator | 2025-03-23 00:09:44.231703 | orchestrator | TASK [loadbalancer : Copying over haproxy single external 
frontend config] ***** 2025-03-23 00:09:44.231711 | orchestrator | Sunday 23 March 2025 00:01:51 +0000 (0:00:07.782) 0:01:03.403 ********** 2025-03-23 00:09:44.231720 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.231730 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.231739 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.231748 | orchestrator | 2025-03-23 00:09:44.231757 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-03-23 00:09:44.231766 | orchestrator | Sunday 23 March 2025 00:01:53 +0000 (0:00:01.449) 0:01:04.853 ********** 2025-03-23 00:09:44.231775 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-03-23 00:09:44.231807 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-03-23 00:09:44.231821 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-03-23 00:09:44.231830 | orchestrator | 2025-03-23 00:09:44.231840 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-03-23 00:09:44.231849 | orchestrator | Sunday 23 March 2025 00:01:56 +0000 (0:00:03.380) 0:01:08.234 ********** 2025-03-23 00:09:44.231858 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-03-23 00:09:44.231868 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-03-23 00:09:44.231877 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-03-23 00:09:44.231886 | orchestrator | 2025-03-23 00:09:44.231895 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] 
********************************* 2025-03-23 00:09:44.231904 | orchestrator | Sunday 23 March 2025 00:02:01 +0000 (0:00:05.265) 0:01:13.499 ********** 2025-03-23 00:09:44.231932 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-03-23 00:09:44.231942 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-03-23 00:09:44.231951 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-03-23 00:09:44.231960 | orchestrator | 2025-03-23 00:09:44.231970 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-03-23 00:09:44.231979 | orchestrator | Sunday 23 March 2025 00:02:05 +0000 (0:00:03.839) 0:01:17.338 ********** 2025-03-23 00:09:44.232012 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-03-23 00:09:44.232022 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-03-23 00:09:44.232031 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-03-23 00:09:44.232041 | orchestrator | 2025-03-23 00:09:44.232050 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-03-23 00:09:44.232072 | orchestrator | Sunday 23 March 2025 00:02:10 +0000 (0:00:04.797) 0:01:22.136 ********** 2025-03-23 00:09:44.232080 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 00:09:44.232088 | orchestrator | 2025-03-23 00:09:44.232096 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-03-23 00:09:44.232104 | orchestrator | Sunday 23 March 2025 00:02:11 +0000 (0:00:00.994) 0:01:23.131 ********** 2025-03-23 00:09:44.232113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-03-23 00:09:44.232121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-03-23 00:09:44.232139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-03-23 00:09:44.232152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-23 00:09:44.232160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-23 00:09:44.232169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-23 00:09:44.232177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-23 00:09:44.232186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-23 00:09:44.232194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-23 00:09:44.232208 | orchestrator | 2025-03-23 00:09:44.232216 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-03-23 00:09:44.232224 | orchestrator | Sunday 23 March 2025 00:02:15 +0000 (0:00:04.134) 0:01:27.265 ********** 2025-03-23 00:09:44.232239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-03-23 00:09:44.232269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-23 00:09:44.232278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-23 00:09:44.232286 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.232295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-03-23 00:09:44.232303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-23 00:09:44.232311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-23 00:09:44.232324 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.232340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-03-23 00:09:44.232349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-23 00:09:44.232357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-23 00:09:44.232365 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.232374 | orchestrator | 2025-03-23 00:09:44.232382 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-03-23 00:09:44.232390 | orchestrator | Sunday 23 March 2025 00:02:16 +0000 (0:00:01.456) 0:01:28.722 ********** 2025-03-23 00:09:44.232398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-03-23 00:09:44.232407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-23 00:09:44.232415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-23 00:09:44.232443 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.232452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-03-23 00:09:44.232473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-23 00:09:44.232521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-23 00:09:44.232530 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.232538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-03-23 00:09:44.232547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-23 00:09:44.232555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-23 00:09:44.232563 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.232571 | orchestrator | 2025-03-23 00:09:44.232622 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-03-23 00:09:44.232637 | orchestrator | Sunday 23 March 2025 00:02:20 +0000 (0:00:03.975) 0:01:32.697 
********** 2025-03-23 00:09:44.232645 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-03-23 00:09:44.232653 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-03-23 00:09:44.232661 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-03-23 00:09:44.232670 | orchestrator | 2025-03-23 00:09:44.232678 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-03-23 00:09:44.232686 | orchestrator | Sunday 23 March 2025 00:02:26 +0000 (0:00:05.285) 0:01:37.983 ********** 2025-03-23 00:09:44.232694 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-03-23 00:09:44.232702 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-03-23 00:09:44.232710 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-03-23 00:09:44.232718 | orchestrator | 2025-03-23 00:09:44.232726 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-03-23 00:09:44.232737 | orchestrator | Sunday 23 March 2025 00:02:28 +0000 (0:00:02.121) 0:01:40.104 ********** 2025-03-23 00:09:44.232746 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-03-23 00:09:44.232759 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-03-23 00:09:44.232798 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-03-23 00:09:44.232807 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 
'dest': 'id_rsa.pub'})  2025-03-23 00:09:44.232815 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-03-23 00:09:44.232823 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.232831 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.232840 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-03-23 00:09:44.232848 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.232856 | orchestrator | 2025-03-23 00:09:44.232864 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-03-23 00:09:44.232872 | orchestrator | Sunday 23 March 2025 00:02:31 +0000 (0:00:02.905) 0:01:43.010 ********** 2025-03-23 00:09:44.232880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-03-23 00:09:44.232889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-03-23 00:09:44.232898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-03-23 00:09:44.232911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-23 00:09:44.232919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-23 00:09:44.232933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-23 00:09:44.232942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-23 00:09:44.232990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5baf8ccaf19abf975c04dda173dced2773340a8e', '__omit_place_holder__5baf8ccaf19abf975c04dda173dced2773340a8e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-23 00:09:44.233002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-23 00:09:44.233014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5baf8ccaf19abf975c04dda173dced2773340a8e', '__omit_place_holder__5baf8ccaf19abf975c04dda173dced2773340a8e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-23 00:09:44.233023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-23 00:09:44.233031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': 
{'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5baf8ccaf19abf975c04dda173dced2773340a8e', '__omit_place_holder__5baf8ccaf19abf975c04dda173dced2773340a8e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-23 00:09:44.233065 | orchestrator | 2025-03-23 00:09:44.233078 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-03-23 00:09:44.233087 | orchestrator | Sunday 23 March 2025 00:02:35 +0000 (0:00:04.422) 0:01:47.433 ********** 2025-03-23 00:09:44.233095 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 00:09:44.233128 | orchestrator | 2025-03-23 00:09:44.233137 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-03-23 00:09:44.233146 | orchestrator | Sunday 23 March 2025 00:02:36 +0000 (0:00:01.241) 0:01:48.674 ********** 2025-03-23 00:09:44.233155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 
'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-03-23 00:09:44.233164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-03-23 00:09:44.233197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.233207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  
2025-03-23 00:09:44.233214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-03-23 00:09:44.233226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-03-23 00:09:44.233234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.233241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.233256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-03-23 00:09:44.233264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-03-23 00:09:44.233271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.233278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.233286 | orchestrator | 2025-03-23 00:09:44.233293 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-03-23 00:09:44.233300 | orchestrator | Sunday 23 March 2025 00:02:44 +0000 (0:00:07.438) 0:01:56.113 ********** 2025-03-23 00:09:44.233316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.1', 'volumes': 
['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-03-23 00:09:44.233324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-03-23 00:09:44.233335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.233343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group':
'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-03-23 00:09:44.233350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-03-23 00:09:44.233358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.233369 | orchestrator | skipping:
[testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.233376 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.233384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.233394 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.233406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042',
'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-03-23 00:09:44.233414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-03-23 00:09:44.233421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.233428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.233435 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.233442 | orchestrator |
2025-03-23 00:09:44.233450 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2025-03-23 00:09:44.233457 | orchestrator | Sunday 23 March 2025 00:02:46 +0000 (0:00:01.770) 0:01:57.883 **********
2025-03-23 00:09:44.233464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-03-23 00:09:44.233475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-03-23 00:09:44.233483 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.233491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-03-23 00:09:44.233498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-03-23 00:09:44.233508 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.233516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-03-23 00:09:44.233523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-03-23 00:09:44.233530 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.233537 | orchestrator |
2025-03-23 00:09:44.233544 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2025-03-23 00:09:44.233551 | orchestrator | Sunday 23 March 2025 00:02:49 +0000 (0:00:03.650) 0:02:01.534 **********
2025-03-23 00:09:44.233558 | orchestrator | changed: [testbed-node-0]
2025-03-23 00:09:44.233565 | orchestrator | changed: [testbed-node-1]
2025-03-23 00:09:44.233572 | orchestrator | changed: [testbed-node-2]
2025-03-23 00:09:44.233592 | orchestrator |
2025-03-23 00:09:44.233600 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2025-03-23 00:09:44.233607 | orchestrator | Sunday 23 March 2025 00:02:52 +0000 (0:00:02.450) 0:02:03.984 **********
2025-03-23 00:09:44.233614 | orchestrator | changed: [testbed-node-0]
2025-03-23 00:09:44.233621 | orchestrator | changed: [testbed-node-1]
2025-03-23 00:09:44.233628 | orchestrator | changed: [testbed-node-2]
2025-03-23 00:09:44.233635 | orchestrator |
2025-03-23 00:09:44.233642 | orchestrator | TASK [include_role : barbican] *************************************************
2025-03-23 00:09:44.233649 | orchestrator | Sunday 23 March 2025 00:02:56 +0000 (0:00:04.314) 0:02:08.298 **********
2025-03-23 00:09:44.233656 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-23 00:09:44.233663 | orchestrator |
2025-03-23 00:09:44.233670 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2025-03-23 00:09:44.233676 | orchestrator | Sunday 23 March 2025 00:02:58 +0000 (0:00:01.656) 0:02:09.955 **********
2025-03-23 00:09:44.233684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-03-23 00:09:44.233699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.233711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout':
'30'}}})
2025-03-23 00:09:44.234199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-03-23 00:09:44.234345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.234373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image':
'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.234388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-03-23 00:09:44.234403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries':
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.234459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.234475 | orchestrator |
2025-03-23 00:09:44.234491 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2025-03-23 00:09:44.234506 | orchestrator | Sunday 23 March 2025 00:03:05 +0000 (0:00:07.365) 0:02:17.320 **********
2025-03-23 00:09:44.234534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-03-23 00:09:44.234550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.234565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.234607 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.234624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/',
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-03-23 00:09:44.234660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.234676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.234691 | orchestrator | skipping: [testbed-node-2]
2025-03-23
00:09:44.234716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-03-23 00:09:44.234732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.234746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1',
'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.234768 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.234783 | orchestrator |
2025-03-23 00:09:44.234798 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2025-03-23 00:09:44.234813 | orchestrator | Sunday 23 March 2025 00:03:06 +0000 (0:00:01.231) 0:02:18.551 **********
2025-03-23 00:09:44.234828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-03-23 00:09:44.234850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-03-23 00:09:44.234866 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.234881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-03-23 00:09:44.234901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-03-23 00:09:44.234916 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.234930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled':
'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-03-23 00:09:44.234944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-03-23 00:09:44.234959 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.234973 | orchestrator |
2025-03-23 00:09:44.234987 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2025-03-23 00:09:44.235001 | orchestrator | Sunday 23 March 2025 00:03:08 +0000 (0:00:01.647) 0:02:20.199 **********
2025-03-23 00:09:44.235015 | orchestrator | changed: [testbed-node-0]
2025-03-23 00:09:44.235029 | orchestrator | changed: [testbed-node-1]
2025-03-23 00:09:44.235043 | orchestrator | changed: [testbed-node-2]
2025-03-23 00:09:44.235057 | orchestrator |
2025-03-23 00:09:44.235071 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2025-03-23 00:09:44.235085 | orchestrator | Sunday 23 March 2025 00:03:09 +0000 (0:00:01.328) 0:02:21.527 **********
2025-03-23 00:09:44.235099 | orchestrator | changed: [testbed-node-0]
2025-03-23 00:09:44.235113 | orchestrator | changed: [testbed-node-1]
2025-03-23 00:09:44.235127 | orchestrator | changed: [testbed-node-2]
2025-03-23 00:09:44.235141 | orchestrator |
2025-03-23 00:09:44.235155 | orchestrator | TASK [include_role : blazar] ***************************************************
2025-03-23 00:09:44.235169 | orchestrator | Sunday 23 March 2025 00:03:11 +0000 (0:00:00.416) 0:02:23.477 **********
2025-03-23 00:09:44.235183 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.235197 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.235212 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.235226 | orchestrator |
2025-03-23 00:09:44.235240 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2025-03-23 00:09:44.235253 | orchestrator | Sunday 23 March 2025 00:03:12 +0000 (0:00:00.416) 0:02:23.893 **********
2025-03-23 00:09:44.235273 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-23 00:09:44.235295 | orchestrator |
2025-03-23 00:09:44.235309 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2025-03-23 00:09:44.235323 | orchestrator | Sunday 23 March 2025 00:03:12 +0000 (0:00:00.791) 0:02:24.685 **********
2025-03-23 00:09:44.235337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-03-23 00:09:44.235363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']},
'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-03-23 00:09:44.235385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-03-23 00:09:44.235400 | orchestrator |
2025-03-23 00:09:44.235415 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2025-03-23 00:09:44.235429 | orchestrator | Sunday 23 March 2025 00:03:16 +0000 (0:00:03.503) 0:02:28.188 **********
2025-03-23 00:09:44.235444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081
check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-03-23 00:09:44.235459 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.235482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-03-23 00:09:44.235503 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.235518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-03-23 00:09:44.235533 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.235547 | orchestrator | 2025-03-23 00:09:44.235561 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-03-23 00:09:44.235575 | orchestrator | Sunday 23 March 2025 00:03:20 +0000 (0:00:03.849) 0:02:32.038 ********** 2025-03-23 00:09:44.235631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-03-23 00:09:44.235649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-03-23 00:09:44.235665 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.235680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 
'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-03-23 00:09:44.235695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-03-23 00:09:44.235709 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.235723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-03-23 00:09:44.235745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-03-23 00:09:44.235760 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.235780 | orchestrator | 2025-03-23 00:09:44.235794 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-03-23 00:09:44.235809 | orchestrator | Sunday 23 March 2025 00:03:24 +0000 (0:00:03.902) 0:02:35.940 ********** 2025-03-23 00:09:44.235823 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.235837 | 
orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.235851 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.235865 | orchestrator | 2025-03-23 00:09:44.235878 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-03-23 00:09:44.235892 | orchestrator | Sunday 23 March 2025 00:03:25 +0000 (0:00:01.484) 0:02:37.425 ********** 2025-03-23 00:09:44.235906 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.235920 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.235934 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.235948 | orchestrator | 2025-03-23 00:09:44.235962 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-03-23 00:09:44.235976 | orchestrator | Sunday 23 March 2025 00:03:28 +0000 (0:00:02.791) 0:02:40.216 ********** 2025-03-23 00:09:44.235990 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 00:09:44.236004 | orchestrator | 2025-03-23 00:09:44.236018 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-03-23 00:09:44.236032 | orchestrator | Sunday 23 March 2025 00:03:29 +0000 (0:00:00.976) 0:02:41.193 ********** 2025-03-23 00:09:44.236053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-23 00:09:44.236070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.236085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-23 00:09:44.236113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.236140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.236155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.236178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.236194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.236217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-23 00:09:44.236241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.236257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.236272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.236286 | orchestrator | 2025-03-23 00:09:44.236306 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-03-23 00:09:44.236321 | orchestrator | Sunday 23 March 2025 00:03:38 +0000 (0:00:08.822) 0:02:50.015 ********** 2025-03-23 00:09:44.236335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  
2025-03-23 00:09:44.236357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.236380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.236395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.236409 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.236430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-23 00:09:44.236445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.236467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.236491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-23 00:09:44.236506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.236521 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.236535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.236557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.236607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.236624 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.236639 | orchestrator | 2025-03-23 00:09:44.236653 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-03-23 00:09:44.236667 | orchestrator | Sunday 23 March 2025 00:03:40 +0000 (0:00:02.105) 0:02:52.120 ********** 2025-03-23 00:09:44.236681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-03-23 00:09:44.236696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-03-23 00:09:44.236710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-03-23 00:09:44.236724 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.236738 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-03-23 00:09:44.236753 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.236767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-03-23 00:09:44.236781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-03-23 00:09:44.236796 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.236810 | orchestrator | 2025-03-23 00:09:44.236824 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-03-23 00:09:44.236838 | orchestrator | Sunday 23 March 2025 00:03:43 +0000 (0:00:02.760) 0:02:54.881 ********** 2025-03-23 00:09:44.236852 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:09:44.236866 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:09:44.236880 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:09:44.236893 | orchestrator | 2025-03-23 00:09:44.236907 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-03-23 00:09:44.236922 | orchestrator | Sunday 23 March 2025 00:03:45 +0000 (0:00:01.877) 0:02:56.758 ********** 2025-03-23 00:09:44.236935 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:09:44.236950 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:09:44.236963 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:09:44.236977 | orchestrator | 2025-03-23 00:09:44.236991 | orchestrator | TASK [include_role : cloudkitty] 
***********************************************
2025-03-23 00:09:44.237012 | orchestrator | Sunday 23 March 2025 00:03:47 +0000 (0:00:02.458) 0:02:59.216 **********
2025-03-23 00:09:44.237026 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.237040 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.237054 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.237068 | orchestrator |
2025-03-23 00:09:44.237082 | orchestrator | TASK [include_role : cyborg] ***************************************************
2025-03-23 00:09:44.237096 | orchestrator | Sunday 23 March 2025 00:03:47 +0000 (0:00:00.384) 0:02:59.601 **********
2025-03-23 00:09:44.237110 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.237124 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.237138 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.237152 | orchestrator |
2025-03-23 00:09:44.237165 | orchestrator | TASK [include_role : designate] ************************************************
2025-03-23 00:09:44.237179 | orchestrator | Sunday 23 March 2025 00:03:48 +0000 (0:00:00.396) 0:02:59.997 **********
2025-03-23 00:09:44.237199 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-23 00:09:44.237214 | orchestrator |
2025-03-23 00:09:44.237228 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2025-03-23 00:09:44.237242 | orchestrator | Sunday 23 March 2025 00:03:49 +0000 (0:00:00.975) 0:03:00.972 **********
2025-03-23 00:09:44.237257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-03-23 00:09:44.237272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-03-23 00:09:44.237288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.237302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.237324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.237339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.237373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.237390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-03-23 00:09:44.237405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-03-23 00:09:44.237420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-03-23 00:09:44.237441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.237468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.237490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-03-23 00:09:44.237505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.237520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.237534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.237556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.237631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.237659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.237675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.237690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.237704 | orchestrator |
2025-03-23 00:09:44.237719 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2025-03-23 00:09:44.237733 | orchestrator | Sunday 23 March 2025 00:03:56 +0000 (0:00:06.891) 0:03:07.864 **********
2025-03-23 00:09:44.237748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-03-23 00:09:44.237771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-03-23 00:09:44.237797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.237812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.237834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.237849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.237864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.237878 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.237892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-03-23 00:09:44.237923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-03-23 00:09:44.237938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.237960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.237975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.237990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.238004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.238157 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.238188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-03-23 00:09:44.238204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-03-23 00:09:44.238245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.238261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.238276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.238291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.238317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.238331 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.238345 | orchestrator |
2025-03-23 00:09:44.238360 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2025-03-23 00:09:44.238374 | orchestrator | Sunday 23 March 2025 00:03:58 +0000 (0:00:02.459) 0:03:10.323 **********
2025-03-23 00:09:44.238389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-03-23 00:09:44.238404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-03-23 00:09:44.238419 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.238433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-03-23 00:09:44.238447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-03-23 00:09:44.238461 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.238476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-03-23 00:09:44.238490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-03-23 00:09:44.238504 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.238518 | orchestrator |
2025-03-23 00:09:44.238532 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2025-03-23 00:09:44.238570 | orchestrator | Sunday 23 March 2025 00:04:01 +0000 (0:00:02.665) 0:03:12.989 **********
2025-03-23 00:09:44.238647 | orchestrator | changed: [testbed-node-0]
2025-03-23 00:09:44.238664 | orchestrator | changed: [testbed-node-1]
2025-03-23 00:09:44.238679 | orchestrator | changed: [testbed-node-2]
2025-03-23 00:09:44.238693 | orchestrator |
2025-03-23 00:09:44.238707 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2025-03-23 00:09:44.238721 | orchestrator | Sunday 23 March 2025 00:04:02 +0000 (0:00:01.534) 0:03:14.523 **********
2025-03-23 00:09:44.238735 | orchestrator | changed: [testbed-node-0]
2025-03-23 00:09:44.238749 | orchestrator | changed: [testbed-node-1]
2025-03-23 00:09:44.238763 | orchestrator | changed: [testbed-node-2]
2025-03-23 00:09:44.238777 | orchestrator |
2025-03-23 00:09:44.238792 | orchestrator | TASK [include_role : etcd] *****************************************************
2025-03-23 00:09:44.238806 | orchestrator | Sunday 23 March 2025 00:04:05 +0000 (0:00:02.612) 0:03:17.135 **********
2025-03-23 00:09:44.238829 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.238843 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.238857 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.238871 | orchestrator |
2025-03-23 00:09:44.238885 | orchestrator | TASK [include_role : glance] ***************************************************
2025-03-23 00:09:44.238898 | orchestrator | Sunday 23 March 2025 00:04:05 +0000 (0:00:00.556) 0:03:17.692 **********
2025-03-23 00:09:44.238912 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-23 00:09:44.238926 | orchestrator |
2025-03-23 00:09:44.238940 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2025-03-23 00:09:44.238954 | orchestrator | Sunday 23 March 2025 00:04:07 +0000 (0:00:01.330) 0:03:19.022 **********
2025-03-23 00:09:44.238981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-03-23 00:09:44.239025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-03-23 00:09:44.239048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-03-23 00:09:44.239086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-03-23 00:09:44.239115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '',
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-23 00:09:44.239136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-23 00:09:44.239149 | orchestrator | 2025-03-23 00:09:44.239162 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-03-23 00:09:44.239174 | orchestrator | Sunday 23 March 2025 00:04:15 +0000 (0:00:08.446) 0:03:27.469 ********** 2025-03-23 00:09:44.239205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-03-23 00:09:44.239235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u 
openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-23 00:09:44.239248 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.239279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-03-23 00:09:44.239309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-23 00:09:44.239323 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.239351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-03-23 00:09:44.239379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-23 00:09:44.239393 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.239406 | orchestrator | 2025-03-23 00:09:44.239419 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-03-23 00:09:44.239431 | orchestrator | Sunday 23 March 2025 00:04:22 +0000 (0:00:06.507) 0:03:33.976 ********** 2025-03-23 00:09:44.239444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-23 00:09:44.239458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-23 00:09:44.239471 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.239498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-23 00:09:44.239519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-23 00:09:44.239533 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.239546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-23 00:09:44.239559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-23 00:09:44.239571 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.239599 | orchestrator | 2025-03-23 00:09:44.239612 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-03-23 00:09:44.239624 | orchestrator | Sunday 23 March 2025 00:04:28 +0000 (0:00:05.981) 0:03:39.958 ********** 2025-03-23 00:09:44.239637 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:09:44.239649 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:09:44.239662 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:09:44.239674 | orchestrator | 2025-03-23 00:09:44.239687 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-03-23 00:09:44.239699 | orchestrator | Sunday 23 March 2025 00:04:29 +0000 (0:00:01.758) 0:03:41.717 ********** 2025-03-23 00:09:44.239712 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:09:44.239724 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:09:44.239736 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:09:44.239749 | orchestrator | 2025-03-23 00:09:44.239761 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-03-23 00:09:44.239774 | orchestrator | Sunday 23 March 2025 00:04:32 +0000 (0:00:02.909) 0:03:44.627 ********** 2025-03-23 
00:09:44.239786 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.239807 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.239820 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.239833 | orchestrator | 2025-03-23 00:09:44.239845 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-03-23 00:09:44.239857 | orchestrator | Sunday 23 March 2025 00:04:33 +0000 (0:00:00.496) 0:03:45.123 ********** 2025-03-23 00:09:44.239870 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 00:09:44.239882 | orchestrator | 2025-03-23 00:09:44.239894 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-03-23 00:09:44.239912 | orchestrator | Sunday 23 March 2025 00:04:34 +0000 (0:00:01.033) 0:03:46.156 ********** 2025-03-23 00:09:44.239925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-23 00:09:44.239953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-23 00:09:44.239967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-23 00:09:44.239980 | orchestrator | 2025-03-23 00:09:44.239993 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-03-23 00:09:44.240005 | orchestrator | Sunday 23 March 2025 00:04:38 +0000 (0:00:04.073) 0:03:50.230 ********** 2025-03-23 00:09:44.240027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  
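[Editor's note, not part of the job output] The grafana haproxy items above mix native booleans (`'external': False`) with kolla-style string flags (`'enabled': 'yes'`), both of which kolla-ansible treats as truthy. As a minimal sketch of how such mixed flags can be coerced uniformly (the helper name `to_bool` is illustrative, not kolla-ansible's actual filter):

```python
def to_bool(value):
    """Coerce a kolla-style flag ('yes'/'no', 'true'/'false', bool) to bool."""
    if isinstance(value, bool):
        return value
    return str(value).strip().lower() in ("yes", "true", "1")

# Flags exactly as they appear in the grafana_server entry above:
service = {"enabled": "yes", "external": False}
print(to_bool(service["enabled"]), to_bool(service["external"]))  # True False
```

This explains why `'enabled': 'yes'` and `'enabled': True` behave identically in the skipped/changed results logged for each node.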
2025-03-23 00:09:44.240041 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.240054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-03-23 00:09:44.240067 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.240080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-03-23 00:09:44.240098 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.240111 | orchestrator | 2025-03-23 00:09:44.240124 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-03-23 00:09:44.240140 | orchestrator | Sunday 23 March 2025 00:04:39 +0000 (0:00:00.568) 0:03:50.798 ********** 2025-03-23 00:09:44.240153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-03-23 00:09:44.240170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-03-23 00:09:44.240183 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.240196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-03-23 00:09:44.240223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-03-23 00:09:44.240237 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.240250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-03-23 00:09:44.240263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-03-23 00:09:44.240275 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.240288 | orchestrator | 2025-03-23 00:09:44.240300 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-03-23 00:09:44.240312 | orchestrator | Sunday 23 March 2025 00:04:39 +0000 (0:00:00.839) 0:03:51.638 ********** 2025-03-23 00:09:44.240325 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:09:44.240337 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:09:44.240349 | 
orchestrator | changed: [testbed-node-2] 2025-03-23 00:09:44.240362 | orchestrator | 2025-03-23 00:09:44.240374 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-03-23 00:09:44.240387 | orchestrator | Sunday 23 March 2025 00:04:41 +0000 (0:00:01.718) 0:03:53.356 ********** 2025-03-23 00:09:44.240399 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:09:44.240411 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:09:44.240423 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:09:44.240436 | orchestrator | 2025-03-23 00:09:44.240448 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-03-23 00:09:44.240460 | orchestrator | Sunday 23 March 2025 00:04:43 +0000 (0:00:02.395) 0:03:55.752 ********** 2025-03-23 00:09:44.240472 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.240485 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.240497 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.240509 | orchestrator | 2025-03-23 00:09:44.240522 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-03-23 00:09:44.240534 | orchestrator | Sunday 23 March 2025 00:04:44 +0000 (0:00:00.357) 0:03:56.110 ********** 2025-03-23 00:09:44.240556 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 00:09:44.240568 | orchestrator | 2025-03-23 00:09:44.240593 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-03-23 00:09:44.240606 | orchestrator | Sunday 23 March 2025 00:04:45 +0000 (0:00:01.270) 0:03:57.381 ********** 2025-03-23 00:09:44.240619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-23 00:09:44.240668 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-23 00:09:44.240695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-23 00:09:44.240719 | orchestrator | 2025-03-23 00:09:44.240733 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-03-23 00:09:44.240745 | orchestrator | Sunday 23 March 2025 00:04:50 +0000 (0:00:04.762) 0:04:02.143 ********** 2025-03-23 00:09:44.240758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-03-23 00:09:44.240777 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.240806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-03-23 00:09:44.240829 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.240842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-03-23 00:09:44.240862 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.240875 | orchestrator | 2025-03-23 00:09:44.240887 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-03-23 00:09:44.240900 | orchestrator | Sunday 23 March 2025 00:04:51 +0000 (0:00:00.865) 0:04:03.008 ********** 2025-03-23 00:09:44.240912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-03-23 00:09:44.240927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-03-23 00:09:44.240941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-03-23 00:09:44.240969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-03-23 00:09:44.240983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-03-23 00:09:44.240997 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.241014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-03-23 00:09:44.241028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 
'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-03-23 00:09:44.241047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-03-23 00:09:44.241060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-03-23 00:09:44.241073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-03-23 00:09:44.241085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-03-23 00:09:44.241098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-03-23 00:09:44.241111 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.241124 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-03-23 00:09:44.241136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-03-23 00:09:44.241149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-03-23 00:09:44.241161 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.241174 | orchestrator | 2025-03-23 00:09:44.241186 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-03-23 00:09:44.241199 | orchestrator | Sunday 23 March 2025 00:04:52 +0000 (0:00:01.616) 0:04:04.625 ********** 2025-03-23 00:09:44.241211 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:09:44.241224 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:09:44.241236 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:09:44.241249 | orchestrator | 2025-03-23 00:09:44.241261 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-03-23 00:09:44.241274 | orchestrator | Sunday 23 March 2025 00:04:54 +0000 (0:00:01.470) 0:04:06.095 ********** 2025-03-23 00:09:44.241286 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:09:44.241298 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:09:44.241310 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:09:44.241323 | 
orchestrator | 2025-03-23 00:09:44.241349 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-03-23 00:09:44.241363 | orchestrator | Sunday 23 March 2025 00:04:57 +0000 (0:00:02.908) 0:04:09.003 ********** 2025-03-23 00:09:44.241382 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.241394 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.241407 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.241419 | orchestrator | 2025-03-23 00:09:44.241432 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-03-23 00:09:44.241444 | orchestrator | Sunday 23 March 2025 00:04:58 +0000 (0:00:00.963) 0:04:09.967 ********** 2025-03-23 00:09:44.241456 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.241469 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.241569 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.241629 | orchestrator | 2025-03-23 00:09:44.241643 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-03-23 00:09:44.241656 | orchestrator | Sunday 23 March 2025 00:04:58 +0000 (0:00:00.596) 0:04:10.564 ********** 2025-03-23 00:09:44.241668 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 00:09:44.241681 | orchestrator | 2025-03-23 00:09:44.241693 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-03-23 00:09:44.241705 | orchestrator | Sunday 23 March 2025 00:05:00 +0000 (0:00:01.543) 0:04:12.107 ********** 2025-03-23 00:09:44.241719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-03-23 00:09:44.241734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-03-23 00:09:44.241747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-23 00:09:44.241779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-23 00:09:44.241803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-23 00:09:44.241817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-23 00:09:44.241830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-03-23 00:09:44.241843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-23 00:09:44.241856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-23 00:09:44.241875 | orchestrator | 2025-03-23 00:09:44.241888 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-03-23 00:09:44.241900 | orchestrator | Sunday 23 March 2025 00:05:04 +0000 (0:00:04.268) 0:04:16.375 ********** 2025-03-23 00:09:44.241929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-03-23 00:09:44.241943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-23 00:09:44.241954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-23 00:09:44.241965 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.241976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-03-23 00:09:44.241987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-23 00:09:44.242038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-23 00:09:44.242052 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.242063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-03-23 00:09:44.242073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-23 00:09:44.242084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-23 00:09:44.242095 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.242105 | orchestrator | 2025-03-23 00:09:44.242115 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-03-23 00:09:44.242130 | orchestrator | Sunday 23 March 2025 00:05:05 +0000 (0:00:00.898) 0:04:17.273 ********** 2025-03-23 00:09:44.242140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-03-23 00:09:44.242154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-03-23 00:09:44.242170 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.242180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-03-23 00:09:44.242191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-03-23 00:09:44.242201 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.242211 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-03-23 00:09:44.242235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-03-23 00:09:44.242246 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.242257 | orchestrator | 2025-03-23 00:09:44.242267 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-03-23 00:09:44.242277 | orchestrator | Sunday 23 March 2025 00:05:06 +0000 (0:00:01.275) 0:04:18.549 ********** 2025-03-23 00:09:44.242287 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:09:44.242297 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:09:44.242308 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:09:44.242318 | orchestrator | 2025-03-23 00:09:44.242328 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-03-23 00:09:44.242338 | orchestrator | Sunday 23 March 2025 00:05:08 +0000 (0:00:01.468) 0:04:20.018 ********** 2025-03-23 00:09:44.242348 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:09:44.242358 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:09:44.242368 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:09:44.242378 | orchestrator | 2025-03-23 00:09:44.242388 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-03-23 00:09:44.242398 | orchestrator | Sunday 23 March 2025 00:05:11 +0000 (0:00:02.777) 0:04:22.796 ********** 2025-03-23 00:09:44.242408 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.242418 | orchestrator | 
skipping: [testbed-node-1] 2025-03-23 00:09:44.242429 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.242446 | orchestrator | 2025-03-23 00:09:44.242456 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-03-23 00:09:44.242466 | orchestrator | Sunday 23 March 2025 00:05:11 +0000 (0:00:00.506) 0:04:23.303 ********** 2025-03-23 00:09:44.242476 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 00:09:44.242486 | orchestrator | 2025-03-23 00:09:44.242496 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-03-23 00:09:44.242506 | orchestrator | Sunday 23 March 2025 00:05:12 +0000 (0:00:01.312) 0:04:24.616 ********** 2025-03-23 00:09:44.242517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-23 00:09:44.242533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.242544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-23 00:09:44.242568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.242595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-23 00:09:44.242607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.242626 | orchestrator | 2025-03-23 00:09:44.242637 | orchestrator | TASK 
[haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-03-23 00:09:44.242647 | orchestrator | Sunday 23 March 2025 00:05:18 +0000 (0:00:05.173) 0:04:29.790 ********** 2025-03-23 00:09:44.242658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-23 00:09:44.242668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.242679 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.242701 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-23 00:09:44.242713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.242729 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.242740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-23 00:09:44.242751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.242761 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.242771 | orchestrator | 2025-03-23 00:09:44.242781 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-03-23 00:09:44.242792 | orchestrator | Sunday 23 March 2025 00:05:18 +0000 (0:00:00.937) 0:04:30.727 ********** 2025-03-23 00:09:44.242806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': 
'9511'}})  2025-03-23 00:09:44.242817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-03-23 00:09:44.242827 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.242838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-03-23 00:09:44.242859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-03-23 00:09:44.242870 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.242880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-03-23 00:09:44.242891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-03-23 00:09:44.242901 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.242911 | orchestrator | 2025-03-23 00:09:44.242922 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-03-23 00:09:44.242932 | orchestrator | Sunday 23 March 2025 00:05:20 +0000 (0:00:01.380) 0:04:32.108 ********** 2025-03-23 00:09:44.242942 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:09:44.242952 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:09:44.242962 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:09:44.242977 | orchestrator | 2025-03-23 00:09:44.242987 | orchestrator | TASK 
[proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-03-23 00:09:44.242998 | orchestrator | Sunday 23 March 2025 00:05:21 +0000 (0:00:01.482) 0:04:33.590 ********** 2025-03-23 00:09:44.243008 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:09:44.243018 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:09:44.243028 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:09:44.243038 | orchestrator | 2025-03-23 00:09:44.243048 | orchestrator | TASK [include_role : manila] *************************************************** 2025-03-23 00:09:44.243058 | orchestrator | Sunday 23 March 2025 00:05:24 +0000 (0:00:02.561) 0:04:36.152 ********** 2025-03-23 00:09:44.243068 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 00:09:44.243078 | orchestrator | 2025-03-23 00:09:44.243088 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-03-23 00:09:44.243098 | orchestrator | Sunday 23 March 2025 00:05:25 +0000 (0:00:01.348) 0:04:37.500 ********** 2025-03-23 00:09:44.243112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-03-23 00:09:44.243123 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.243134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.243156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.243167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 
'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-03-23 00:09:44.243183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.243194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': 
{'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-03-23 00:09:44.243205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.243215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.243237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.243248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.243265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.243275 | orchestrator | 2025-03-23 00:09:44.243286 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-03-23 00:09:44.243296 | orchestrator | Sunday 23 March 2025 00:05:30 +0000 (0:00:05.046) 0:04:42.547 ********** 2025-03-23 00:09:44.243306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-03-23 00:09:44.243317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.243327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.243349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.243367 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.243377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-03-23 00:09:44.243388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.243399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.243409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.243419 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.243430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-03-23 00:09:44.243452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.243468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.243479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.243489 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.243500 | orchestrator |
2025-03-23 00:09:44.243510 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2025-03-23 00:09:44.243520 | orchestrator | Sunday 23 March 2025 00:05:32 +0000 (0:00:01.493) 0:04:44.040 **********
2025-03-23 00:09:44.243530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-03-23 00:09:44.243541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-03-23 00:09:44.243551 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.243561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-03-23 00:09:44.243571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-03-23 00:09:44.243594 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.243605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-03-23 00:09:44.243615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-03-23 00:09:44.243626 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.243636 | orchestrator |
2025-03-23 00:09:44.243646 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2025-03-23 00:09:44.243656 | orchestrator | Sunday 23 March 2025 00:05:34 +0000 (0:00:01.806) 0:04:45.847 **********
2025-03-23 00:09:44.243666 | orchestrator | changed: [testbed-node-0]
2025-03-23 00:09:44.243676 | orchestrator | changed: [testbed-node-1]
2025-03-23 00:09:44.243686 | orchestrator | changed: [testbed-node-2]
2025-03-23 00:09:44.243701 | orchestrator |
2025-03-23 00:09:44.243711 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2025-03-23 00:09:44.243721 | orchestrator | Sunday 23 March 2025 00:05:35 +0000 (0:00:01.653) 0:04:47.500 **********
2025-03-23 00:09:44.243731 | orchestrator | changed: [testbed-node-0]
2025-03-23 00:09:44.243741 | orchestrator | changed: [testbed-node-1]
2025-03-23 00:09:44.243751 | orchestrator | changed: [testbed-node-2]
2025-03-23 00:09:44.243761 | orchestrator |
2025-03-23 00:09:44.243771 | orchestrator | TASK [include_role : mariadb] **************************************************
2025-03-23 00:09:44.243785 | orchestrator | Sunday 23 March 2025 00:05:38 +0000 (0:00:02.750) 0:04:50.250 **********
2025-03-23 00:09:44.243796 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-23 00:09:44.243806 | orchestrator |
2025-03-23 00:09:44.243816 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2025-03-23 00:09:44.243826 | orchestrator | Sunday 23 March 2025 00:05:39 +0000 (0:00:01.474) 0:04:51.725 **********
2025-03-23 00:09:44.243836 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-03-23 00:09:44.243846 | orchestrator |
2025-03-23 00:09:44.243856 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2025-03-23 00:09:44.243877 | orchestrator | Sunday 23 March 2025 00:05:43 +0000 (0:00:03.776) 0:04:55.501 **********
2025-03-23 00:09:44.243889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-03-23 00:09:44.243901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-03-23 00:09:44.243912 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.243934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-03-23 00:09:44.243952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-03-23 00:09:44.243962 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.243973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-03-23 00:09:44.243989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-03-23 00:09:44.244000 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.244011 | orchestrator |
2025-03-23 00:09:44.244021 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2025-03-23 00:09:44.244031 | orchestrator | Sunday 23 March 2025 00:05:47 +0000 (0:00:03.563) 0:04:59.065 **********
2025-03-23 00:09:44.244055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-03-23 00:09:44.244067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-03-23 00:09:44.244078 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.244088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-03-23 00:09:44.244119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-03-23 00:09:44.244131 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.244142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-03-23 00:09:44.244163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-03-23 00:09:44.244179 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.244189 | orchestrator |
2025-03-23 00:09:44.244199 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2025-03-23 00:09:44.244209 | orchestrator | Sunday 23 March 2025 00:05:51 +0000 (0:00:03.712) 0:05:02.778 **********
2025-03-23 00:09:44.244220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-03-23 00:09:44.244231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-03-23 00:09:44.244241 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.244263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-03-23 00:09:44.244275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-03-23 00:09:44.244286 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.244297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-03-23 00:09:44.244313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-03-23 00:09:44.244324 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.244334 | orchestrator |
2025-03-23 00:09:44.244344 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2025-03-23 00:09:44.244354 | orchestrator | Sunday 23 March 2025 00:05:54 +0000 (0:00:03.734) 0:05:06.512 **********
2025-03-23 00:09:44.244364 | orchestrator | changed: [testbed-node-0]
2025-03-23 00:09:44.244374 | orchestrator | changed: [testbed-node-1]
2025-03-23 00:09:44.244384 | orchestrator | changed: [testbed-node-2]
2025-03-23 00:09:44.244394 | orchestrator |
2025-03-23 00:09:44.244404 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2025-03-23 00:09:44.244415 | orchestrator | Sunday 23 March 2025 00:05:56 +0000 (0:00:02.222) 0:05:08.735 **********
2025-03-23 00:09:44.244425 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.244435 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.244445 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.244455 | orchestrator |
2025-03-23 00:09:44.244464 | orchestrator | TASK [include_role : masakari] *************************************************
2025-03-23 00:09:44.244474 | orchestrator | Sunday 23 March 2025 00:05:58 +0000 (0:00:00.529) 0:05:10.754 **********
2025-03-23 00:09:44.244484 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.244495 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.244504 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.244514 | orchestrator |
2025-03-23 00:09:44.244525 | orchestrator | TASK [include_role : memcached] ************************************************
2025-03-23 00:09:44.244534 | orchestrator | Sunday 23 March 2025 00:05:59 +0000 (0:00:00.529) 0:05:11.283 **********
2025-03-23 00:09:44.244544 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-23 00:09:44.244554 | orchestrator |
2025-03-23 00:09:44.244564 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2025-03-23 00:09:44.244574 | orchestrator | Sunday 23 March 2025 00:06:00 +0000 (0:00:01.401) 0:05:12.684 **********
2025-03-23 00:09:44.244630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-03-23 00:09:44.244643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-03-23 00:09:44.244661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-03-23 00:09:44.244671 | orchestrator |
2025-03-23 00:09:44.244682 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2025-03-23 00:09:44.244692 | orchestrator | Sunday 23 March 2025 00:06:02 +0000 (0:00:01.541) 0:05:14.226 **********
2025-03-23 00:09:44.244711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-03-23 00:09:44.244722 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.244733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-03-23 00:09:44.244743 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.244766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-03-23 00:09:44.244778 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.244789 | orchestrator |
2025-03-23 00:09:44.244799 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2025-03-23 00:09:44.244809 | orchestrator | Sunday 23 March 2025 00:06:03 +0000 (0:00:00.783) 0:05:15.010 **********
2025-03-23 00:09:44.244819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-03-23 00:09:44.244835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-03-23 00:09:44.244846 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.244856 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.244867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-03-23 00:09:44.244877 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.244887 | orchestrator |
2025-03-23 00:09:44.244897 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2025-03-23 00:09:44.244907 | orchestrator | Sunday 23 March 2025 00:06:04 +0000 (0:00:01.086) 0:05:16.096 **********
2025-03-23 00:09:44.244916 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.244925 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.244933 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.244946 | orchestrator |
2025-03-23 00:09:44.244955 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config]
********** 2025-03-23 00:09:44.244963 | orchestrator | Sunday 23 March 2025 00:06:04 +0000 (0:00:00.465) 0:05:16.562 ********** 2025-03-23 00:09:44.244972 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.244980 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.244989 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.244998 | orchestrator | 2025-03-23 00:09:44.245006 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-03-23 00:09:44.245015 | orchestrator | Sunday 23 March 2025 00:06:06 +0000 (0:00:01.638) 0:05:18.201 ********** 2025-03-23 00:09:44.245023 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.245032 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.245040 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.245049 | orchestrator | 2025-03-23 00:09:44.245057 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-03-23 00:09:44.245066 | orchestrator | Sunday 23 March 2025 00:06:07 +0000 (0:00:00.586) 0:05:18.787 ********** 2025-03-23 00:09:44.245077 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 00:09:44.245086 | orchestrator | 2025-03-23 00:09:44.245095 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-03-23 00:09:44.245103 | orchestrator | Sunday 23 March 2025 00:06:08 +0000 (0:00:01.956) 0:05:20.744 ********** 2025-03-23 00:09:44.245112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-23 00:09:44.245139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-23 00:09:44.245155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.245164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.245173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.245188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.245198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.245223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.245233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 00:09:44.245242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 00:09:44.245251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.245267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.245281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 00:09:44.245302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 
00:09:44.245312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 00:09:44.245322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.245331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2025-03-23 00:09:44.245341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.245359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-23 00:09:44.245384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:09:44.245395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 00:09:44.245404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.245413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.245422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.245431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 
'yes'}}}})  2025-03-23 00:09:44.245455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.245471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 00:09:44.245480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.245490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-23 00:09:44.245499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.245512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 00:09:44.245522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 00:09:44.245547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 00:09:44.245558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.245567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 
'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.245576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-03-23 00:09:44.245596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.245610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-03-23 00:09:44.245637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.245647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:09:44.245657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:09:44.245666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-03-23 00:09:44.245675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-03-23 00:09:44.245684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.245697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.245723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-03-23 00:09:44.245734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-03-23 00:09:44.245743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-03-23 00:09:44.245752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.245766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-03-23 00:09:44.245791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.245801 | orchestrator |
2025-03-23 00:09:44.245811 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2025-03-23 00:09:44.245819 | orchestrator | Sunday 23 March 2025 00:06:15 +0000 (0:00:06.391) 0:05:27.135 **********
2025-03-23 00:09:44.245828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-03-23 00:09:44.245837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.245846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.245860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.245884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-03-23 00:09:44.245902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.245928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-03-23 00:09:44.245944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-03-23 00:09:44.245958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.245973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-03-23 00:09:44.245990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.246012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:09:44.246041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-03-23 00:09:44.246050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-03-23 00:09:44.246060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.246074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.246083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.246109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-03-23 00:09:44.246120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.246130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-03-23 00:09:44.246143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-03-23 00:09:44.246152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.246161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.246175 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.246200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-03-23 00:09:44.246210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-03-23 00:09:44.246219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-03-23 00:09:44.246233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.246242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.246257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-03-23 00:09:44.246278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.246287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.246301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.246310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-23 00:09:44.246319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-03-23 00:09:44.246328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-03-23 00:09:44.246350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.246360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-03-23 00:09:44.246376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-03-23 00:09:44.246390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-03-23 00:09:44.246399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-03-23 00:09:44.246408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-03-23 00:09:44.246433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python
6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.246443 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.246452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.246461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-23 00:09:44.246474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.246483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:09:44.246492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-23 00:09:44.246507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.246527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-23 00:09:44.246543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-23 00:09:44.246552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.246561 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.246569 | orchestrator | 2025-03-23 00:09:44.246578 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-03-23 00:09:44.246600 | orchestrator | Sunday 23 March 2025 00:06:17 +0000 (0:00:01.970) 0:05:29.106 ********** 2025-03-23 00:09:44.246609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-03-23 00:09:44.246618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-03-23 00:09:44.246627 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.246638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-03-23 00:09:44.246647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-03-23 00:09:44.246656 | orchestrator | skipping: [testbed-node-1] 
2025-03-23 00:09:44.246664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-03-23 00:09:44.246673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-03-23 00:09:44.246681 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.246690 | orchestrator | 2025-03-23 00:09:44.246698 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-03-23 00:09:44.246707 | orchestrator | Sunday 23 March 2025 00:06:19 +0000 (0:00:02.565) 0:05:31.671 ********** 2025-03-23 00:09:44.246716 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:09:44.246724 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:09:44.246733 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:09:44.246741 | orchestrator | 2025-03-23 00:09:44.246750 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-03-23 00:09:44.246758 | orchestrator | Sunday 23 March 2025 00:06:21 +0000 (0:00:01.272) 0:05:32.943 ********** 2025-03-23 00:09:44.246767 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:09:44.246776 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:09:44.246801 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:09:44.246811 | orchestrator | 2025-03-23 00:09:44.246820 | orchestrator | TASK [include_role : placement] ************************************************ 2025-03-23 00:09:44.246829 | orchestrator | Sunday 23 March 2025 00:06:23 +0000 (0:00:02.515) 0:05:35.459 ********** 2025-03-23 00:09:44.246837 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 00:09:44.246846 | orchestrator | 2025-03-23 
00:09:44.246854 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-03-23 00:09:44.246863 | orchestrator | Sunday 23 March 2025 00:06:25 +0000 (0:00:01.745) 0:05:37.205 ********** 2025-03-23 00:09:44.246878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-23 00:09:44.246888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-23 00:09:44.246897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-23 00:09:44.246906 | orchestrator | 2025-03-23 00:09:44.246915 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-03-23 00:09:44.246923 | orchestrator | Sunday 23 March 2025 00:06:30 +0000 (0:00:04.565) 0:05:41.770 ********** 2025-03-23 00:09:44.246942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-23 00:09:44.246957 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.246966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-23 00:09:44.246975 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.246989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-23 00:09:44.246999 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.247007 | orchestrator | 2025-03-23 00:09:44.247016 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-03-23 00:09:44.247024 | orchestrator | Sunday 23 March 2025 00:06:30 +0000 (0:00:00.848) 0:05:42.618 ********** 2025-03-23 00:09:44.247033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-03-23 00:09:44.247042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-03-23 00:09:44.247051 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.247060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-03-23 00:09:44.247069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-03-23 00:09:44.247078 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.247086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}})  2025-03-23 00:09:44.247099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-03-23 00:09:44.247108 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.247117 | orchestrator | 2025-03-23 00:09:44.247125 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-03-23 00:09:44.247134 | orchestrator | Sunday 23 March 2025 00:06:32 +0000 (0:00:01.371) 0:05:43.989 ********** 2025-03-23 00:09:44.247142 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:09:44.247151 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:09:44.247159 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:09:44.247168 | orchestrator | 2025-03-23 00:09:44.247176 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-03-23 00:09:44.247185 | orchestrator | Sunday 23 March 2025 00:06:33 +0000 (0:00:01.480) 0:05:45.470 ********** 2025-03-23 00:09:44.247204 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:09:44.247214 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:09:44.247223 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:09:44.247232 | orchestrator | 2025-03-23 00:09:44.247240 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-03-23 00:09:44.247249 | orchestrator | Sunday 23 March 2025 00:06:36 +0000 (0:00:02.333) 0:05:47.804 ********** 2025-03-23 00:09:44.247257 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 00:09:44.247266 | orchestrator | 2025-03-23 00:09:44.247275 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-03-23 00:09:44.247283 | orchestrator | 
Sunday 23 March 2025 00:06:37 +0000 (0:00:01.745) 0:05:49.550 ********** 2025-03-23 00:09:44.247292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-23 00:09:44.247302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.247311 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.247342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-23 00:09:44.247353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 
'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.247362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.247371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-23 00:09:44.247391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.247400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.247409 | orchestrator | 2025-03-23 00:09:44.247418 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-03-23 00:09:44.247426 | orchestrator | Sunday 23 March 2025 00:06:43 +0000 (0:00:05.954) 0:05:55.505 ********** 2025-03-23 00:09:44.247447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-23 00:09:44.247457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.247467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 
'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.247475 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.247494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-23 00:09:44.247504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 
'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.247525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.247535 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.247544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': 
'8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-23 00:09:44.247559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.247573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.247614 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.247624 | orchestrator | 2025-03-23 00:09:44.247633 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-03-23 00:09:44.247642 | orchestrator | Sunday 23 March 2025 00:06:44 +0000 (0:00:01.112) 0:05:56.617 ********** 2025-03-23 00:09:44.247654 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-23 00:09:44.247664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-23 00:09:44.247673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-23 00:09:44.247681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-23 00:09:44.247690 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.247714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-23 00:09:44.247724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-23 00:09:44.247734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-23 00:09:44.247748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-23 00:09:44.247757 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.247766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-23 00:09:44.247775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-23 00:09:44.247784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-23 00:09:44.247797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-23 00:09:44.247806 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.247815 | orchestrator | 2025-03-23 00:09:44.247824 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-03-23 00:09:44.247832 | orchestrator | Sunday 23 March 2025 00:06:46 +0000 (0:00:01.472) 0:05:58.090 ********** 2025-03-23 00:09:44.247841 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:09:44.247849 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:09:44.247858 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:09:44.247866 | orchestrator | 2025-03-23 00:09:44.247875 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-03-23 00:09:44.247884 | orchestrator | Sunday 23 March 2025 00:06:47 +0000 (0:00:01.605) 0:05:59.696 ********** 2025-03-23 
00:09:44.247892 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:09:44.247901 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:09:44.247909 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:09:44.247918 | orchestrator | 2025-03-23 00:09:44.247927 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-03-23 00:09:44.247935 | orchestrator | Sunday 23 March 2025 00:06:50 +0000 (0:00:02.716) 0:06:02.412 ********** 2025-03-23 00:09:44.247944 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 00:09:44.247952 | orchestrator | 2025-03-23 00:09:44.247961 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-03-23 00:09:44.247969 | orchestrator | Sunday 23 March 2025 00:06:52 +0000 (0:00:01.704) 0:06:04.117 ********** 2025-03-23 00:09:44.247978 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-03-23 00:09:44.247987 | orchestrator | 2025-03-23 00:09:44.247996 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-03-23 00:09:44.248004 | orchestrator | Sunday 23 March 2025 00:06:53 +0000 (0:00:01.368) 0:06:05.485 ********** 2025-03-23 00:09:44.248013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-03-23 00:09:44.248022 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-03-23 00:09:44.248043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-03-23 00:09:44.248053 | orchestrator | 2025-03-23 00:09:44.248062 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-03-23 00:09:44.248075 | orchestrator | Sunday 23 March 2025 00:06:59 +0000 (0:00:05.911) 0:06:11.397 ********** 2025-03-23 00:09:44.248083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-23 00:09:44.248092 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.248100 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-23 00:09:44.248108 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.248116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-23 00:09:44.248125 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.248133 | orchestrator | 2025-03-23 00:09:44.248141 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-03-23 00:09:44.248149 | orchestrator | Sunday 23 March 2025 00:07:01 +0000 (0:00:01.566) 0:06:12.964 ********** 2025-03-23 00:09:44.248157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-23 00:09:44.248165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  
2025-03-23 00:09:44.248173 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.248181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-23 00:09:44.248192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-23 00:09:44.248200 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.248208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-23 00:09:44.248217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-23 00:09:44.248225 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.248233 | orchestrator | 2025-03-23 00:09:44.248256 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-03-23 00:09:44.248265 | orchestrator | Sunday 23 March 2025 00:07:03 +0000 (0:00:02.467) 0:06:15.432 ********** 2025-03-23 00:09:44.248274 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:09:44.248282 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:09:44.248290 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:09:44.248301 | orchestrator | 2025-03-23 00:09:44.248309 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] 
********** 2025-03-23 00:09:44.248317 | orchestrator | Sunday 23 March 2025 00:07:07 +0000 (0:00:03.496) 0:06:18.928 ********** 2025-03-23 00:09:44.248325 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:09:44.248333 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:09:44.248341 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:09:44.248349 | orchestrator | 2025-03-23 00:09:44.248357 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-03-23 00:09:44.248365 | orchestrator | Sunday 23 March 2025 00:07:11 +0000 (0:00:04.121) 0:06:23.050 ********** 2025-03-23 00:09:44.248373 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-03-23 00:09:44.248381 | orchestrator | 2025-03-23 00:09:44.248389 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-03-23 00:09:44.248397 | orchestrator | Sunday 23 March 2025 00:07:12 +0000 (0:00:01.441) 0:06:24.491 ********** 2025-03-23 00:09:44.248412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-23 00:09:44.248421 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.248429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-23 00:09:44.248438 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.248446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-23 00:09:44.248454 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.248462 | orchestrator | 2025-03-23 00:09:44.248470 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-03-23 00:09:44.248478 | orchestrator | Sunday 23 March 2025 00:07:14 +0000 (0:00:01.879) 0:06:26.371 ********** 2025-03-23 00:09:44.248486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-23 00:09:44.248498 | orchestrator | skipping: [testbed-node-0] 2025-03-23 
00:09:44.248507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-23 00:09:44.248515 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.248534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-23 00:09:44.248544 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.248552 | orchestrator | 2025-03-23 00:09:44.248560 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-03-23 00:09:44.248568 | orchestrator | Sunday 23 March 2025 00:07:16 +0000 (0:00:02.097) 0:06:28.469 ********** 2025-03-23 00:09:44.248576 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.248595 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.248604 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.248612 | orchestrator | 2025-03-23 00:09:44.248620 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-03-23 00:09:44.248628 | orchestrator | Sunday 23 March 2025 
00:07:19 +0000 (0:00:02.455) 0:06:30.924 ********** 2025-03-23 00:09:44.248636 | orchestrator | ok: [testbed-node-1] 2025-03-23 00:09:44.248644 | orchestrator | ok: [testbed-node-2] 2025-03-23 00:09:44.248651 | orchestrator | ok: [testbed-node-0] 2025-03-23 00:09:44.248660 | orchestrator | 2025-03-23 00:09:44.248667 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-03-23 00:09:44.248675 | orchestrator | Sunday 23 March 2025 00:07:22 +0000 (0:00:03.772) 0:06:34.697 ********** 2025-03-23 00:09:44.248683 | orchestrator | ok: [testbed-node-1] 2025-03-23 00:09:44.248691 | orchestrator | ok: [testbed-node-0] 2025-03-23 00:09:44.248699 | orchestrator | ok: [testbed-node-2] 2025-03-23 00:09:44.248707 | orchestrator | 2025-03-23 00:09:44.248715 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-03-23 00:09:44.248723 | orchestrator | Sunday 23 March 2025 00:07:26 +0000 (0:00:03.739) 0:06:38.436 ********** 2025-03-23 00:09:44.248731 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-03-23 00:09:44.248740 | orchestrator | 2025-03-23 00:09:44.248748 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-03-23 00:09:44.248759 | orchestrator | Sunday 23 March 2025 00:07:28 +0000 (0:00:02.021) 0:06:40.458 ********** 2025-03-23 00:09:44.248773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 
'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-23 00:09:44.248786 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.248795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-23 00:09:44.248803 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.248811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-23 00:09:44.248819 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.248827 | orchestrator | 2025-03-23 00:09:44.248835 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-03-23 00:09:44.248844 | orchestrator | Sunday 23 March 2025 00:07:31 +0000 (0:00:02.377) 0:06:42.836 ********** 2025-03-23 00:09:44.248863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 
'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-23 00:09:44.248872 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.248880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-23 00:09:44.248889 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.248897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-23 00:09:44.248905 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.248913 | orchestrator | 2025-03-23 00:09:44.248921 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-03-23 00:09:44.248929 | orchestrator | Sunday 23 March 2025 00:07:33 +0000 (0:00:01.965) 0:06:44.802 ********** 2025-03-23 00:09:44.248937 | orchestrator | skipping: 
[testbed-node-2] 2025-03-23 00:09:44.248945 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.248953 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.248961 | orchestrator | 2025-03-23 00:09:44.248969 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-03-23 00:09:44.248981 | orchestrator | Sunday 23 March 2025 00:07:35 +0000 (0:00:02.170) 0:06:46.972 ********** 2025-03-23 00:09:44.248989 | orchestrator | ok: [testbed-node-1] 2025-03-23 00:09:44.248997 | orchestrator | ok: [testbed-node-0] 2025-03-23 00:09:44.249005 | orchestrator | ok: [testbed-node-2] 2025-03-23 00:09:44.249013 | orchestrator | 2025-03-23 00:09:44.249021 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-03-23 00:09:44.249029 | orchestrator | Sunday 23 March 2025 00:07:39 +0000 (0:00:04.100) 0:06:51.073 ********** 2025-03-23 00:09:44.249037 | orchestrator | ok: [testbed-node-0] 2025-03-23 00:09:44.249045 | orchestrator | ok: [testbed-node-1] 2025-03-23 00:09:44.249053 | orchestrator | ok: [testbed-node-2] 2025-03-23 00:09:44.249061 | orchestrator | 2025-03-23 00:09:44.249069 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-03-23 00:09:44.249077 | orchestrator | Sunday 23 March 2025 00:07:43 +0000 (0:00:04.267) 0:06:55.341 ********** 2025-03-23 00:09:44.249085 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 00:09:44.249093 | orchestrator | 2025-03-23 00:09:44.249101 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-03-23 00:09:44.249109 | orchestrator | Sunday 23 March 2025 00:07:45 +0000 (0:00:01.964) 0:06:57.305 ********** 2025-03-23 00:09:44.249117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-03-23 00:09:44.249125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-03-23 00:09:44.249155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-03-23 00:09:44.249164 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-03-23 00:09:44.249177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.249186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-03-23 00:09:44.249200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-03-23 00:09:44.249209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-03-23 00:09:44.249228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 
'timeout': '30'}}})  2025-03-23 00:09:44.249237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.249246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-03-23 00:09:44.249260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-03-23 00:09:44.249275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-03-23 00:09:44.249284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-03-23 00:09:44.249292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.249300 | orchestrator | 2025-03-23 00:09:44.249319 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-03-23 00:09:44.249328 | orchestrator | Sunday 23 March 2025 00:07:50 +0000 (0:00:05.271) 0:07:02.576 ********** 2025-03-23 00:09:44.249336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-03-23 00:09:44.249349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-03-23 00:09:44.249358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-03-23 00:09:44.249366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-03-23 00:09:44.249380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.249388 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.249408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 
'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-03-23 00:09:44.249417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-03-23 00:09:44.249429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  
2025-03-23 00:09:44.249438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-03-23 00:09:44.249446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.249461 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.249469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-03-23 00:09:44.249477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-03-23 00:09:44.249497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-03-23 00:09:44.249510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-03-23 00:09:44.249519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-03-23 00:09:44.249527 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.249536 | orchestrator | 2025-03-23 00:09:44.249544 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-03-23 00:09:44.249552 | orchestrator | Sunday 23 March 2025 00:07:51 +0000 (0:00:01.163) 0:07:03.740 ********** 2025-03-23 00:09:44.249560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-03-23 00:09:44.249568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-03-23 00:09:44.249577 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.249597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-03-23 00:09:44.249605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-03-23 00:09:44.249613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-03-23 00:09:44.249621 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.249630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-03-23 00:09:44.249638 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.249646 | orchestrator | 2025-03-23 00:09:44.249654 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-03-23 00:09:44.249662 | orchestrator | Sunday 23 March 2025 00:07:53 +0000 (0:00:01.730) 0:07:05.471 ********** 2025-03-23 00:09:44.249670 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:09:44.249678 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:09:44.249686 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:09:44.249693 | orchestrator | 2025-03-23 00:09:44.249701 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-03-23 00:09:44.249714 | orchestrator | Sunday 23 March 2025 00:07:55 +0000 (0:00:01.604) 0:07:07.076 ********** 2025-03-23 00:09:44.249722 | orchestrator | changed: [testbed-node-0] 2025-03-23 00:09:44.249730 | orchestrator | changed: [testbed-node-1] 2025-03-23 00:09:44.249738 | orchestrator | changed: [testbed-node-2] 2025-03-23 00:09:44.249746 | orchestrator | 2025-03-23 00:09:44.249754 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-03-23 
00:09:44.249762 | orchestrator | Sunday 23 March 2025 00:07:57 +0000 (0:00:02.546) 0:07:09.622 ********** 2025-03-23 00:09:44.249780 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 00:09:44.249789 | orchestrator | 2025-03-23 00:09:44.249800 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-03-23 00:09:44.249809 | orchestrator | Sunday 23 March 2025 00:07:59 +0000 (0:00:01.589) 0:07:11.212 ********** 2025-03-23 00:09:44.249823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-23 00:09:44.249833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-23 00:09:44.249841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-23 00:09:44.249849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-23 00:09:44.249880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-23 00:09:44.249890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-23 00:09:44.249899 | orchestrator | 2025-03-23 00:09:44.249907 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-03-23 00:09:44.249915 | orchestrator | Sunday 23 March 2025 00:08:06 +0000 (0:00:07.047) 0:07:18.260 ********** 2025-03-23 00:09:44.249924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-03-23 00:09:44.249938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-03-23 00:09:44.249951 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.249971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-03-23 00:09:44.249980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-03-23 00:09:44.249989 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.249997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-03-23 00:09:44.250011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-03-23 00:09:44.250044 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.250052 | orchestrator | 2025-03-23 00:09:44.250060 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-03-23 00:09:44.250068 | orchestrator | Sunday 23 March 2025 00:08:07 +0000 (0:00:01.038) 0:07:19.298 ********** 2025-03-23 00:09:44.250077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-03-23 00:09:44.250096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-23 00:09:44.250105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-23 00:09:44.250113 | orchestrator | 
skipping: [testbed-node-0] 2025-03-23 00:09:44.250122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-03-23 00:09:44.250130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-23 00:09:44.250138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-23 00:09:44.250147 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.250155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-03-23 00:09:44.250163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-23 00:09:44.250171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-23 00:09:44.250179 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.250187 | orchestrator | 2025-03-23 00:09:44.250195 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-03-23 00:09:44.250203 | orchestrator | Sunday 23 March 2025 
00:08:09 +0000 (0:00:01.470) 0:07:20.769 ********** 2025-03-23 00:09:44.250211 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.250226 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.250235 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.250243 | orchestrator | 2025-03-23 00:09:44.250255 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-03-23 00:09:44.250267 | orchestrator | Sunday 23 March 2025 00:08:09 +0000 (0:00:00.572) 0:07:21.341 ********** 2025-03-23 00:09:44.250275 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.250283 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.250291 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.250299 | orchestrator | 2025-03-23 00:09:44.250307 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-03-23 00:09:44.250315 | orchestrator | Sunday 23 March 2025 00:08:11 +0000 (0:00:01.815) 0:07:23.156 ********** 2025-03-23 00:09:44.250323 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 00:09:44.250331 | orchestrator | 2025-03-23 00:09:44.250339 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-03-23 00:09:44.250347 | orchestrator | Sunday 23 March 2025 00:08:13 +0000 (0:00:01.936) 0:07:25.093 ********** 2025-03-23 00:09:44.250355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-03-23 00:09:44.250374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-23 00:09:44.250384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:09:44.250393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:09:44.250401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-23 00:09:44.250410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-03-23 00:09:44.250423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-23 00:09:44.250432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 
'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:09:44.250446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:09:44.250466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-23 00:09:44.250475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-03-23 00:09:44.250484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-23 00:09:44.250497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:09:44.250505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:09:44.250514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-23 00:09:44.250539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-03-23 00:09:44.250549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-23 00:09:44.250558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:09:44.250571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:09:44.250591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  
2025-03-23 00:09:44.250600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:09:44.250615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-03-23 00:09:44.250627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-23 00:09:44.250636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:09:44.250649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:09:44.250657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-23 00:09:44.250671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:09:44.250680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-03-23 00:09:44.250694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': 
{'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-23 00:09:44.250702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:09:44.250715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:09:44.250729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-23 00:09:44.250738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:09:44.250746 | orchestrator | 2025-03-23 00:09:44.250754 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-03-23 00:09:44.250762 | orchestrator | Sunday 23 March 2025 00:08:18 +0000 (0:00:05.395) 0:07:30.488 ********** 2025-03-23 00:09:44.250770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-23 
00:09:44.250779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-23 00:09:44.250794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:09:44.250803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:09:44.250815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-23 00:09:44.250824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-23 00:09:44.250832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 
'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-23 00:09:44.250841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:09:44.250855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:09:44.250868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-23 00:09:44.250881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:09:44.250889 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.250897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-23 00:09:44.250905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-23 00:09:44.250914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:09:44.250922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:09:44.250939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-23 00:09:44.250948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-23 00:09:44.250961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-23 00:09:44.250969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:09:44.250977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': 
{'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-23 00:09:44.250986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:09:44.251003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-23 00:09:44.251012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-23 00:09:44.251027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:09:44.251035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:09:44.251044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:09:44.251052 | orchestrator | skipping: [testbed-node-1] 2025-03-23 
00:09:44.251061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-23 00:09:44.251069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-23 00:09:44.251087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-23 00:09:44.251100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:09:44.251108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:09:44.251116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-23 00:09:44.251125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-23 00:09:44.251133 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.251141 | orchestrator | 2025-03-23 00:09:44.251149 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-03-23 00:09:44.251157 | orchestrator | Sunday 23 March 2025 00:08:20 +0000 (0:00:01.641) 0:07:32.129 ********** 2025-03-23 00:09:44.251165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-03-23 00:09:44.251174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-03-23 00:09:44.251182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-23 00:09:44.251190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-23 00:09:44.251202 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.251211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-03-23 00:09:44.251223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-03-23 00:09:44.251231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-23 00:09:44.251239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-23 00:09:44.251248 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.251256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-03-23 00:09:44.251264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9091', 'listen_port': '9091', 'active_passive': True}})  2025-03-23 00:09:44.251272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-23 00:09:44.251280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-23 00:09:44.251289 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.251297 | orchestrator | 2025-03-23 00:09:44.251305 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-03-23 00:09:44.251313 | orchestrator | Sunday 23 March 2025 00:08:22 +0000 (0:00:01.633) 0:07:33.762 ********** 2025-03-23 00:09:44.251321 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.251329 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.251337 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.251345 | orchestrator | 2025-03-23 00:09:44.251353 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-03-23 00:09:44.251361 | orchestrator | Sunday 23 March 2025 00:08:22 +0000 (0:00:00.700) 0:07:34.463 ********** 2025-03-23 00:09:44.251369 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.251377 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.251385 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.251392 | orchestrator | 2025-03-23 00:09:44.251400 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-03-23 00:09:44.251408 | 
orchestrator | Sunday 23 March 2025 00:08:24 +0000 (0:00:01.840) 0:07:36.304 ********** 2025-03-23 00:09:44.251416 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 00:09:44.251424 | orchestrator | 2025-03-23 00:09:44.251432 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-03-23 00:09:44.251440 | orchestrator | Sunday 23 March 2025 00:08:26 +0000 (0:00:01.626) 0:07:37.930 ********** 2025-03-23 00:09:44.251452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-23 00:09:44.251474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-23 00:09:44.251483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-23 00:09:44.251491 | orchestrator | 2025-03-23 00:09:44.251500 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-03-23 00:09:44.251508 | orchestrator | Sunday 23 March 2025 00:08:29 +0000 (0:00:03.280) 0:07:41.210 ********** 2025-03-23 00:09:44.251516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 
'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-03-23 00:09:44.251531 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.251539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-03-23 00:09:44.251553 | orchestrator | skipping: [testbed-node-1] 2025-03-23 
00:09:44.251566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-03-23 00:09:44.251574 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.251616 | orchestrator | 2025-03-23 00:09:44.251626 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-03-23 00:09:44.251634 | orchestrator | Sunday 23 March 2025 00:08:30 +0000 (0:00:00.946) 0:07:42.156 ********** 2025-03-23 00:09:44.251642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-03-23 00:09:44.251650 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.251658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-03-23 00:09:44.251666 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.251675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-03-23 00:09:44.251683 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.251691 | orchestrator | 2025-03-23 00:09:44.251699 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-03-23 00:09:44.251707 | orchestrator | Sunday 23 March 2025 00:08:31 +0000 (0:00:00.866) 0:07:43.023 ********** 2025-03-23 00:09:44.251715 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.251723 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.251731 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.251739 | orchestrator | 2025-03-23 00:09:44.251747 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-03-23 00:09:44.251755 | orchestrator | Sunday 23 March 2025 00:08:32 +0000 (0:00:00.800) 0:07:43.824 ********** 2025-03-23 00:09:44.251767 | orchestrator | skipping: [testbed-node-0] 2025-03-23 00:09:44.251775 | orchestrator | skipping: [testbed-node-1] 2025-03-23 00:09:44.251783 | orchestrator | skipping: [testbed-node-2] 2025-03-23 00:09:44.251791 | orchestrator | 2025-03-23 00:09:44.251799 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-03-23 00:09:44.251807 | orchestrator | Sunday 23 March 2025 00:08:34 +0000 (0:00:02.513) 0:07:46.337 ********** 2025-03-23 00:09:44.251815 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-23 00:09:44.251823 | orchestrator | 2025-03-23 00:09:44.251831 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-03-23 00:09:44.251839 | orchestrator | Sunday 23 March 2025 00:08:36 +0000 (0:00:01.989) 0:07:48.327 ********** 2025-03-23 00:09:44.251847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-03-23 00:09:44.251860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-03-23 00:09:44.251869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-03-23 00:09:44.251877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-03-23 00:09:44.251900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-03-23 00:09:44.251909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-03-23 00:09:44.251917 | orchestrator |
2025-03-23 00:09:44.251925 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2025-03-23 00:09:44.251937 | orchestrator | Sunday 23 March 2025 00:08:45 +0000 (0:00:08.850) 0:07:57.178 **********
2025-03-23 00:09:44.251946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-03-23 00:09:44.251955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-03-23 00:09:44.252040 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.252050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-03-23 00:09:44.252058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-03-23 00:09:44.252066 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.252078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-03-23 00:09:44.252087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-03-23 00:09:44.252101 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.252109 | orchestrator |
2025-03-23 00:09:44.252117 | orchestrator
| TASK [haproxy-config : Configuring firewall for skyline] ***********************
2025-03-23 00:09:44.252125 | orchestrator | Sunday 23 March 2025 00:08:46 +0000 (0:00:01.279) 0:07:58.458 **********
2025-03-23 00:09:44.252133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-03-23 00:09:44.252141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-03-23 00:09:44.252148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-03-23 00:09:44.252155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-03-23 00:09:44.252163 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.252170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-03-23 00:09:44.252177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-03-23 00:09:44.252184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-03-23 00:09:44.252191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-03-23 00:09:44.252198 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.252205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-03-23 00:09:44.252215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-03-23 00:09:44.252226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-03-23 00:09:44.252236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-03-23 00:09:44.252243 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.252250 | orchestrator |
2025-03-23 00:09:44.252258 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2025-03-23 00:09:44.252269 | orchestrator | Sunday 23 March 2025 00:08:48 +0000 (0:00:01.646) 0:08:00.105 **********
2025-03-23 00:09:44.252276 | orchestrator | changed: [testbed-node-0]
2025-03-23 00:09:44.252283 | orchestrator | changed: [testbed-node-1]
2025-03-23 00:09:44.252290 | orchestrator |
changed: [testbed-node-2]
2025-03-23 00:09:44.252297 | orchestrator |
2025-03-23 00:09:44.252304 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2025-03-23 00:09:44.252311 | orchestrator | Sunday 23 March 2025 00:08:49 +0000 (0:00:01.288) 0:08:01.393 **********
2025-03-23 00:09:44.252318 | orchestrator | changed: [testbed-node-0]
2025-03-23 00:09:44.252325 | orchestrator | changed: [testbed-node-1]
2025-03-23 00:09:44.252332 | orchestrator | changed: [testbed-node-2]
2025-03-23 00:09:44.252339 | orchestrator |
2025-03-23 00:09:44.252346 | orchestrator | TASK [include_role : swift] ****************************************************
2025-03-23 00:09:44.252353 | orchestrator | Sunday 23 March 2025 00:08:52 +0000 (0:00:02.864) 0:08:04.258 **********
2025-03-23 00:09:44.252360 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.252367 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.252374 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.252381 | orchestrator |
2025-03-23 00:09:44.252388 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-03-23 00:09:44.252395 | orchestrator | Sunday 23 March 2025 00:08:53 +0000 (0:00:00.597) 0:08:04.856 **********
2025-03-23 00:09:44.252402 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.252409 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.252416 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.252423 | orchestrator |
2025-03-23 00:09:44.252430 | orchestrator | TASK [include_role : trove] ****************************************************
2025-03-23 00:09:44.252437 | orchestrator | Sunday 23 March 2025 00:08:53 +0000 (0:00:00.591) 0:08:05.447 **********
2025-03-23 00:09:44.252444 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.252451 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.252458 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.252468 | orchestrator |
2025-03-23 00:09:44.252475 | orchestrator | TASK [include_role : venus] ****************************************************
2025-03-23 00:09:44.252482 | orchestrator | Sunday 23 March 2025 00:08:54 +0000 (0:00:00.338) 0:08:05.785 **********
2025-03-23 00:09:44.252489 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.252496 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.252503 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.252510 | orchestrator |
2025-03-23 00:09:44.252517 | orchestrator | TASK [include_role : watcher] **************************************************
2025-03-23 00:09:44.252524 | orchestrator | Sunday 23 March 2025 00:08:54 +0000 (0:00:00.649) 0:08:06.435 **********
2025-03-23 00:09:44.252531 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.252538 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.252545 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.252552 | orchestrator |
2025-03-23 00:09:44.252559 | orchestrator | TASK [include_role : zun] ******************************************************
2025-03-23 00:09:44.252566 | orchestrator | Sunday 23 March 2025 00:08:55 +0000 (0:00:00.634) 0:08:07.069 **********
2025-03-23 00:09:44.252573 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.252591 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.252599 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.252606 | orchestrator |
2025-03-23 00:09:44.252613 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-03-23 00:09:44.252620 | orchestrator | Sunday 23 March 2025 00:08:56 +0000 (0:00:00.824) 0:08:07.893 **********
2025-03-23 00:09:44.252627 | orchestrator | ok: [testbed-node-0]
2025-03-23 00:09:44.252634 | orchestrator | ok: [testbed-node-1]
2025-03-23 00:09:44.252641 | orchestrator | ok: [testbed-node-2]
2025-03-23 00:09:44.252648 | orchestrator |
2025-03-23 00:09:44.252655 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-03-23 00:09:44.252666 | orchestrator | Sunday 23 March 2025 00:08:57 +0000 (0:00:01.072) 0:08:08.966 **********
2025-03-23 00:09:44.252673 | orchestrator | ok: [testbed-node-0]
2025-03-23 00:09:44.252680 | orchestrator | ok: [testbed-node-1]
2025-03-23 00:09:44.252687 | orchestrator | ok: [testbed-node-2]
2025-03-23 00:09:44.252694 | orchestrator |
2025-03-23 00:09:44.252701 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-03-23 00:09:44.252708 | orchestrator | Sunday 23 March 2025 00:08:57 +0000 (0:00:00.693) 0:08:09.660 **********
2025-03-23 00:09:44.252715 | orchestrator | ok: [testbed-node-0]
2025-03-23 00:09:44.252722 | orchestrator | ok: [testbed-node-1]
2025-03-23 00:09:44.252729 | orchestrator | ok: [testbed-node-2]
2025-03-23 00:09:44.252736 | orchestrator |
2025-03-23 00:09:44.252743 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-03-23 00:09:44.252750 | orchestrator | Sunday 23 March 2025 00:08:59 +0000 (0:00:01.115) 0:08:10.775 **********
2025-03-23 00:09:44.252757 | orchestrator | ok: [testbed-node-0]
2025-03-23 00:09:44.252764 | orchestrator | ok: [testbed-node-1]
2025-03-23 00:09:44.252771 | orchestrator | ok: [testbed-node-2]
2025-03-23 00:09:44.252778 | orchestrator |
2025-03-23 00:09:44.252785 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-03-23 00:09:44.252792 | orchestrator | Sunday 23 March 2025 00:09:00 +0000 (0:00:01.429) 0:08:12.205 **********
2025-03-23 00:09:44.252799 | orchestrator | ok: [testbed-node-1]
2025-03-23 00:09:44.252806 | orchestrator | ok: [testbed-node-0]
2025-03-23 00:09:44.252812 | orchestrator | ok: [testbed-node-2]
2025-03-23 00:09:44.252819 | orchestrator |
2025-03-23 00:09:44.252830 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-03-23 00:09:44.252838 | orchestrator | Sunday 23 March 2025 00:09:02 +0000 (0:00:01.550) 0:08:13.755 **********
2025-03-23 00:09:44.252845 | orchestrator | changed: [testbed-node-0]
2025-03-23 00:09:44.252852 | orchestrator | changed: [testbed-node-1]
2025-03-23 00:09:44.252859 | orchestrator | changed: [testbed-node-2]
2025-03-23 00:09:44.252866 | orchestrator |
2025-03-23 00:09:44.252876 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-03-23 00:09:44.252884 | orchestrator | Sunday 23 March 2025 00:09:07 +0000 (0:00:05.941) 0:08:19.697 **********
2025-03-23 00:09:44.252890 | orchestrator | ok: [testbed-node-0]
2025-03-23 00:09:44.252897 | orchestrator | ok: [testbed-node-1]
2025-03-23 00:09:44.252904 | orchestrator | ok: [testbed-node-2]
2025-03-23 00:09:44.252911 | orchestrator |
2025-03-23 00:09:44.252918 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-03-23 00:09:44.252925 | orchestrator | Sunday 23 March 2025 00:09:11 +0000 (0:00:03.228) 0:08:22.926 **********
2025-03-23 00:09:44.252932 | orchestrator | changed: [testbed-node-0]
2025-03-23 00:09:44.252939 | orchestrator | changed: [testbed-node-1]
2025-03-23 00:09:44.252946 | orchestrator | changed: [testbed-node-2]
2025-03-23 00:09:44.252953 | orchestrator |
2025-03-23 00:09:44.252960 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-03-23 00:09:44.252967 | orchestrator | Sunday 23 March 2025 00:09:19 +0000 (0:00:08.034) 0:08:30.960 **********
2025-03-23 00:09:44.252974 | orchestrator | ok: [testbed-node-0]
2025-03-23 00:09:44.252981 | orchestrator | ok: [testbed-node-1]
2025-03-23 00:09:44.252988 | orchestrator | ok: [testbed-node-2]
2025-03-23 00:09:44.252995 | orchestrator |
2025-03-23 00:09:44.253002 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-03-23 00:09:44.253009 | orchestrator | Sunday 23 March 2025 00:09:23 +0000 (0:00:04.337) 0:08:35.297 **********
2025-03-23 00:09:44.253016 | orchestrator | changed: [testbed-node-0]
2025-03-23 00:09:44.253023 | orchestrator | changed: [testbed-node-1]
2025-03-23 00:09:44.253030 | orchestrator | changed: [testbed-node-2]
2025-03-23 00:09:44.253037 | orchestrator |
2025-03-23 00:09:44.253044 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-03-23 00:09:44.253051 | orchestrator | Sunday 23 March 2025 00:09:34 +0000 (0:00:10.812) 0:08:46.110 **********
2025-03-23 00:09:44.253058 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.253069 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.253076 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.253083 | orchestrator |
2025-03-23 00:09:44.253090 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-03-23 00:09:44.253097 | orchestrator | Sunday 23 March 2025 00:09:35 +0000 (0:00:00.664) 0:08:46.775 **********
2025-03-23 00:09:44.253103 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.253111 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.253117 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.253124 | orchestrator |
2025-03-23 00:09:44.253131 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-03-23 00:09:44.253138 | orchestrator | Sunday 23 March 2025 00:09:35 +0000 (0:00:00.667) 0:08:47.442 **********
2025-03-23 00:09:44.253145 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.253152 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.253159 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.253166 | orchestrator |
2025-03-23 00:09:44.253173 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-03-23 00:09:44.253180 | orchestrator | Sunday 23 March 2025 00:09:36 +0000 (0:00:00.377) 0:08:47.820 **********
2025-03-23 00:09:44.253187 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.253194 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.253201 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.253211 | orchestrator |
2025-03-23 00:09:44.253218 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-03-23 00:09:44.253225 | orchestrator | Sunday 23 March 2025 00:09:36 +0000 (0:00:00.655) 0:08:48.475 **********
2025-03-23 00:09:44.253232 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.253239 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.253246 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.253253 | orchestrator |
2025-03-23 00:09:44.253260 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2025-03-23 00:09:44.253267 | orchestrator | Sunday 23 March 2025 00:09:37 +0000 (0:00:00.656) 0:08:49.132 **********
2025-03-23 00:09:44.253274 | orchestrator | skipping: [testbed-node-0]
2025-03-23 00:09:44.253281 | orchestrator | skipping: [testbed-node-1]
2025-03-23 00:09:44.253288 | orchestrator | skipping: [testbed-node-2]
2025-03-23 00:09:44.253295 | orchestrator |
2025-03-23 00:09:44.253302 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2025-03-23 00:09:44.253308 | orchestrator | Sunday 23 March 2025 00:09:38 +0000 (0:00:00.804) 0:08:49.936 **********
2025-03-23 00:09:44.253315 | orchestrator | ok: [testbed-node-0]
2025-03-23 00:09:44.253322 | orchestrator | ok: [testbed-node-1]
2025-03-23 00:09:44.253330 | orchestrator | ok: [testbed-node-2]
2025-03-23 00:09:44.253336 | orchestrator |
2025-03-23 00:09:44.253344 | orchestrator | RUNNING HANDLER [loadbalancer
: Wait for proxysql to listen on VIP] ************
2025-03-23 00:09:44.253350 | orchestrator | Sunday 23 March 2025 00:09:39 +0000 (0:00:01.366) 0:08:51.302 **********
2025-03-23 00:09:44.253357 | orchestrator | ok: [testbed-node-0]
2025-03-23 00:09:44.253364 | orchestrator | ok: [testbed-node-1]
2025-03-23 00:09:44.253371 | orchestrator | ok: [testbed-node-2]
2025-03-23 00:09:44.253378 | orchestrator |
2025-03-23 00:09:44.253385 | orchestrator | PLAY RECAP *********************************************************************
2025-03-23 00:09:44.253392 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=91  rescued=0 ignored=0
2025-03-23 00:09:44.253400 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=91  rescued=0 ignored=0
2025-03-23 00:09:44.253406 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=91  rescued=0 ignored=0
2025-03-23 00:09:44.253414 | orchestrator |
2025-03-23 00:09:44.253420 | orchestrator |
2025-03-23 00:09:44.253427 | orchestrator | TASKS RECAP ********************************************************************
2025-03-23 00:09:44.253441 | orchestrator | Sunday 23 March 2025 00:09:40 +0000 (0:00:01.169) 0:08:52.471 **********
2025-03-23 00:09:47.279361 | orchestrator | ===============================================================================
2025-03-23 00:09:47.279487 | orchestrator | loadbalancer : Start backup keepalived container ----------------------- 10.81s
2025-03-23 00:09:47.279506 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 8.85s
2025-03-23 00:09:47.279521 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 8.82s
2025-03-23 00:09:47.279536 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 8.45s
2025-03-23 00:09:47.279569 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.03s
2025-03-23 00:09:47.279611 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 7.78s
2025-03-23 00:09:47.279626 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 7.44s
2025-03-23 00:09:47.279641 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 7.37s
2025-03-23 00:09:47.279655 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 7.05s
2025-03-23 00:09:47.279669 | orchestrator | loadbalancer : Ensuring proxysql service config subdirectories exist ---- 7.02s
2025-03-23 00:09:47.279683 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 6.89s
2025-03-23 00:09:47.279697 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 6.83s
2025-03-23 00:09:47.279712 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 6.51s
2025-03-23 00:09:47.279727 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 6.39s
2025-03-23 00:09:47.279742 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 5.98s
2025-03-23 00:09:47.279756 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.95s
2025-03-23 00:09:47.279770 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 5.94s
2025-03-23 00:09:47.279783 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.91s
2025-03-23 00:09:47.279798 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.40s
2025-03-23 00:09:47.279812 | orchestrator | loadbalancer : Copying over haproxy start script ------------------------ 5.29s
2025-03-23 00:09:47.279826 | orchestrator | 2025-03-23 00:09:44 | INFO  | Task
443febac-0822-4ada-8e43-97626da28a77 is in state STARTED
2025-03-23 00:09:47.279842 | orchestrator | 2025-03-23 00:09:44 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-23 00:09:47.279856 | orchestrator | 2025-03-23 00:09:44 | INFO  | Wait 1 second(s) until the next check
2025-03-23 00:09:47.279888 | orchestrator | 2025-03-23 00:09:47 | INFO  | Task e20548fb-cbfb-47e5-aaeb-6b78d0faaa3d is in state STARTED
2025-03-23 00:09:47.283051 | orchestrator | 2025-03-23 00:09:47 | INFO  | Task 443febac-0822-4ada-8e43-97626da28a77 is in state STARTED
2025-03-23 00:09:47.286510 | orchestrator | 2025-03-23 00:09:47 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-23 00:09:50.324042 | orchestrator | 2025-03-23 00:09:47 | INFO  | Wait 1 second(s) until the next check
2025-03-23 00:09:50.324172 | orchestrator | 2025-03-23 00:09:50 | INFO  | Task e20548fb-cbfb-47e5-aaeb-6b78d0faaa3d is in state STARTED
2025-03-23 00:09:50.324891 | orchestrator | 2025-03-23 00:09:50 | INFO  | Task 443febac-0822-4ada-8e43-97626da28a77 is in state STARTED
2025-03-23 00:09:50.326314 | orchestrator | 2025-03-23 00:09:50 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-23 00:09:53.370892 | orchestrator | 2025-03-23 00:09:50 | INFO  | Wait 1 second(s) until the next check
2025-03-23 00:09:53.370971 | orchestrator | 2025-03-23 00:09:53 | INFO  | Task e20548fb-cbfb-47e5-aaeb-6b78d0faaa3d is in state STARTED
2025-03-23 00:09:53.371438 | orchestrator | 2025-03-23 00:09:53 | INFO  | Task 443febac-0822-4ada-8e43-97626da28a77 is in state STARTED
2025-03-23 00:09:53.373180 | orchestrator | 2025-03-23 00:09:53 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-23 00:09:53.373273 | orchestrator | 2025-03-23 00:09:53 | INFO  | Wait 1 second(s) until the next check
2025-03-23 00:09:56.424646 | orchestrator | 2025-03-23 00:09:56 | INFO  | Task e20548fb-cbfb-47e5-aaeb-6b78d0faaa3d is in state STARTED
2025-03-23 00:09:56.427634 | orchestrator | 2025-03-23 00:09:56 | INFO  | Task 443febac-0822-4ada-8e43-97626da28a77 is in state STARTED
2025-03-23 00:09:56.427680 | orchestrator | 2025-03-23 00:09:56 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-23 00:09:59.475963 | orchestrator | 2025-03-23 00:09:56 | INFO  | Wait 1 second(s) until the next check
2025-03-23 00:09:59.476099 | orchestrator | 2025-03-23 00:09:59 | INFO  | Task e20548fb-cbfb-47e5-aaeb-6b78d0faaa3d is in state STARTED
2025-03-23 00:09:59.476545 | orchestrator | 2025-03-23 00:09:59 | INFO  | Task 443febac-0822-4ada-8e43-97626da28a77 is in state STARTED
2025-03-23 00:09:59.478390 | orchestrator | 2025-03-23 00:09:59 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-23 00:10:02.524538 | orchestrator | 2025-03-23 00:09:59 | INFO  | Wait 1 second(s) until the next check
2025-03-23 00:10:02.524756 | orchestrator | 2025-03-23 00:10:02 | INFO  | Task e20548fb-cbfb-47e5-aaeb-6b78d0faaa3d is in state STARTED
2025-03-23 00:10:02.530332 | orchestrator | 2025-03-23 00:10:02 | INFO  | Task 443febac-0822-4ada-8e43-97626da28a77 is in state STARTED
2025-03-23 00:10:02.532527 | orchestrator | 2025-03-23 00:10:02 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-23 00:10:05.567381 | orchestrator | 2025-03-23 00:10:02 | INFO  | Wait 1 second(s) until the next check
2025-03-23 00:10:05.567527 | orchestrator | 2025-03-23 00:10:05 | INFO  | Task e20548fb-cbfb-47e5-aaeb-6b78d0faaa3d is in state STARTED
2025-03-23 00:10:05.568171 | orchestrator | 2025-03-23 00:10:05 | INFO  | Task 443febac-0822-4ada-8e43-97626da28a77 is in state STARTED
2025-03-23 00:10:05.568251 | orchestrator | 2025-03-23 00:10:05 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-23 00:10:05.568328 | orchestrator | 2025-03-23 00:10:05 | INFO  | Wait 1 second(s) until the next check
2025-03-23 00:10:08.609874 | orchestrator | 2025-03-23 00:10:08 | INFO  | Task e20548fb-cbfb-47e5-aaeb-6b78d0faaa3d is in state STARTED
2025-03-23 00:10:08.610727 | orchestrator | 2025-03-23 00:10:08 | INFO  | Task 443febac-0822-4ada-8e43-97626da28a77 is in state STARTED
2025-03-23 00:10:08.611406 | orchestrator | 2025-03-23 00:10:08 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-23 00:10:08.611530 | orchestrator | 2025-03-23 00:10:08 | INFO  | Wait 1 second(s) until the next check
2025-03-23 00:10:11.672870 | orchestrator | 2025-03-23 00:10:11 | INFO  | Task e20548fb-cbfb-47e5-aaeb-6b78d0faaa3d is in state STARTED
2025-03-23 00:10:11.673815 | orchestrator | 2025-03-23 00:10:11 | INFO  | Task 443febac-0822-4ada-8e43-97626da28a77 is in state STARTED
2025-03-23 00:10:11.673862 | orchestrator | 2025-03-23 00:10:11 | INFO  | Task 318a89c3-b7d5-4ebd-a603-8dc723b99788 is in state STARTED
2025-03-23 00:10:12.958198 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-03-23 00:10:12.963996 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-03-23 00:10:13.683901 |
2025-03-23 00:10:13.684059 | PLAY [Post output play]
2025-03-23 00:10:13.714170 |
2025-03-23 00:10:13.714309 | LOOP [stage-output : Register sources]
2025-03-23 00:10:13.801672 |
2025-03-23 00:10:13.801954 | TASK [stage-output : Check sudo]
2025-03-23 00:10:14.503760 | orchestrator | sudo: a password is required
2025-03-23 00:10:14.844963 | orchestrator | ok: Runtime: 0:00:00.015103
2025-03-23 00:10:14.869192 |
2025-03-23 00:10:14.869426 | LOOP [stage-output : Set source and destination for files and folders]
2025-03-23 00:10:14.906298 |
2025-03-23 00:10:14.906479 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-03-23 00:10:14.999507 | orchestrator | ok
2025-03-23 00:10:15.010108 |
2025-03-23 00:10:15.010218 | LOOP [stage-output : Ensure target folders exist]
2025-03-23 00:10:15.470051 | orchestrator | ok: "docs"
2025-03-23 00:10:15.470404 |
2025-03-23 00:10:15.703563 | orchestrator | ok: "artifacts"
2025-03-23 00:10:15.933607 | orchestrator | ok: "logs"
2025-03-23 00:10:15.958440 |
2025-03-23 00:10:15.958605 | LOOP [stage-output : Copy files and folders to staging folder]
2025-03-23 00:10:15.999675 |
2025-03-23 00:10:15.999955 | TASK [stage-output : Make all log files readable]
2025-03-23 00:10:16.285936 | orchestrator | ok
2025-03-23 00:10:16.296489 |
2025-03-23 00:10:16.296613 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-03-23 00:10:16.342066 | orchestrator | skipping: Conditional result was False
2025-03-23 00:10:16.359754 |
2025-03-23 00:10:16.359894 | TASK [stage-output : Discover log files for compression]
2025-03-23 00:10:16.384778 | orchestrator | skipping: Conditional result was False
2025-03-23 00:10:16.398664 |
2025-03-23 00:10:16.398828 | LOOP [stage-output : Archive everything from logs]
2025-03-23 00:10:16.469833 |
2025-03-23 00:10:16.469984 | PLAY [Post cleanup play]
2025-03-23 00:10:16.494238 |
2025-03-23 00:10:16.494449 | TASK [Set cloud fact (Zuul deployment)]
2025-03-23 00:10:16.556983 | orchestrator | ok
2025-03-23 00:10:16.566426 |
2025-03-23 00:10:16.566529 | TASK [Set cloud fact (local deployment)]
2025-03-23 00:10:16.611145 | orchestrator | skipping: Conditional result was False
2025-03-23 00:10:16.636716 |
2025-03-23 00:10:16.637094 | TASK [Clean the cloud environment]
2025-03-23 00:10:17.256875 | orchestrator | 2025-03-23 00:10:17 - clean up servers
2025-03-23 00:10:18.161450 | orchestrator | 2025-03-23 00:10:18 - testbed-manager
2025-03-23 00:10:18.246424 | orchestrator | 2025-03-23 00:10:18 - testbed-node-3
2025-03-23 00:10:18.347030 | orchestrator | 2025-03-23 00:10:18 - testbed-node-0
2025-03-23 00:10:18.436052 | orchestrator | 2025-03-23 00:10:18 - testbed-node-5
2025-03-23 00:10:18.525188 | orchestrator | 2025-03-23 00:10:18 - testbed-node-1
2025-03-23 00:10:18.612866 | orchestrator | 2025-03-23 00:10:18 - testbed-node-2
2025-03-23 00:10:18.707654 | orchestrator | 2025-03-23 00:10:18 - testbed-node-4
2025-03-23 00:10:18.792640 | orchestrator | 2025-03-23 00:10:18 - clean up keypairs
2025-03-23 00:10:18.811201 | orchestrator | 2025-03-23 00:10:18 - testbed
2025-03-23 00:10:18.837918 | orchestrator | 2025-03-23 00:10:18 - wait for servers to be gone
2025-03-23 00:10:32.238936 | orchestrator | 2025-03-23 00:10:32 - clean up ports
2025-03-23 00:10:32.440861 | orchestrator | 2025-03-23 00:10:32 - 1972c6d6-3390-44d1-9707-d3682d47c7d3
2025-03-23 00:10:32.667377 | orchestrator | 2025-03-23 00:10:32 - 24ae19bf-fc0e-4844-be42-accec51cc877
2025-03-23 00:10:32.912218 | orchestrator | 2025-03-23 00:10:32 - 330013b0-4bb0-49fd-b3e2-762f8c58d5ff
2025-03-23 00:10:33.098912 | orchestrator | 2025-03-23 00:10:33 - 337b6519-de4f-4457-8416-29ed40a26071
2025-03-23 00:10:33.316052 | orchestrator | 2025-03-23 00:10:33 - 3e5f230b-f6c1-46b3-9916-29fa69fe7db2
2025-03-23 00:10:33.495709 | orchestrator | 2025-03-23 00:10:33 - 5dfa4794-f4fe-4750-a561-18f901a45232
2025-03-23 00:10:33.674860 | orchestrator | 2025-03-23 00:10:33 - 90c9586f-1857-46c4-83d8-082e9c74e49b
2025-03-23 00:10:34.011390 | orchestrator | 2025-03-23 00:10:34 - clean up volumes
2025-03-23 00:10:34.162981 | orchestrator | 2025-03-23 00:10:34 - testbed-volume-5-node-base
2025-03-23 00:10:34.209244 | orchestrator | 2025-03-23 00:10:34 - testbed-volume-1-node-base
2025-03-23 00:10:34.246520 | orchestrator | 2025-03-23 00:10:34 - testbed-volume-4-node-base
2025-03-23 00:10:34.285974 | orchestrator | 2025-03-23 00:10:34 - testbed-volume-manager-base
2025-03-23 00:10:34.325159 | orchestrator | 2025-03-23 00:10:34 - testbed-volume-2-node-base
2025-03-23 00:10:34.364598 | orchestrator | 2025-03-23 00:10:34 - testbed-volume-0-node-base
2025-03-23 00:10:34.404973 | orchestrator | 2025-03-23 00:10:34 - testbed-volume-3-node-base
2025-03-23 00:10:34.441892 | orchestrator | 2025-03-23 00:10:34 - testbed-volume-13-node-1
2025-03-23 00:10:34.478689 |
orchestrator | 2025-03-23 00:10:34 - testbed-volume-8-node-2 2025-03-23 00:10:34.516535 | orchestrator | 2025-03-23 00:10:34 - testbed-volume-14-node-2 2025-03-23 00:10:34.555473 | orchestrator | 2025-03-23 00:10:34 - testbed-volume-5-node-5 2025-03-23 00:10:34.596440 | orchestrator | 2025-03-23 00:10:34 - testbed-volume-2-node-2 2025-03-23 00:10:34.643293 | orchestrator | 2025-03-23 00:10:34 - testbed-volume-16-node-4 2025-03-23 00:10:34.680732 | orchestrator | 2025-03-23 00:10:34 - testbed-volume-15-node-3 2025-03-23 00:10:34.716786 | orchestrator | 2025-03-23 00:10:34 - testbed-volume-17-node-5 2025-03-23 00:10:34.758636 | orchestrator | 2025-03-23 00:10:34 - testbed-volume-9-node-3 2025-03-23 00:10:34.796063 | orchestrator | 2025-03-23 00:10:34 - testbed-volume-10-node-4 2025-03-23 00:10:34.838105 | orchestrator | 2025-03-23 00:10:34 - testbed-volume-3-node-3 2025-03-23 00:10:34.877474 | orchestrator | 2025-03-23 00:10:34 - testbed-volume-6-node-0 2025-03-23 00:10:34.915819 | orchestrator | 2025-03-23 00:10:34 - testbed-volume-11-node-5 2025-03-23 00:10:34.954361 | orchestrator | 2025-03-23 00:10:34 - testbed-volume-1-node-1 2025-03-23 00:10:34.993980 | orchestrator | 2025-03-23 00:10:34 - testbed-volume-12-node-0 2025-03-23 00:10:35.034661 | orchestrator | 2025-03-23 00:10:35 - testbed-volume-0-node-0 2025-03-23 00:10:35.072773 | orchestrator | 2025-03-23 00:10:35 - testbed-volume-7-node-1 2025-03-23 00:10:35.109939 | orchestrator | 2025-03-23 00:10:35 - testbed-volume-4-node-4 2025-03-23 00:10:35.149306 | orchestrator | 2025-03-23 00:10:35 - disconnect routers 2025-03-23 00:10:35.249788 | orchestrator | 2025-03-23 00:10:35 - testbed 2025-03-23 00:10:36.033037 | orchestrator | 2025-03-23 00:10:36 - clean up subnets 2025-03-23 00:10:36.064134 | orchestrator | 2025-03-23 00:10:36 - subnet-testbed-management 2025-03-23 00:10:36.176632 | orchestrator | 2025-03-23 00:10:36 - clean up networks 2025-03-23 00:10:36.366125 | orchestrator | 2025-03-23 00:10:36 - 
net-testbed-management 2025-03-23 00:10:36.655084 | orchestrator | 2025-03-23 00:10:36 - clean up security groups 2025-03-23 00:10:36.686523 | orchestrator | 2025-03-23 00:10:36 - testbed-management 2025-03-23 00:10:36.780189 | orchestrator | 2025-03-23 00:10:36 - testbed-node 2025-03-23 00:10:36.865358 | orchestrator | 2025-03-23 00:10:36 - clean up floating ips 2025-03-23 00:10:36.893537 | orchestrator | 2025-03-23 00:10:36 - 81.163.193.215 2025-03-23 00:10:37.269622 | orchestrator | 2025-03-23 00:10:37 - clean up routers 2025-03-23 00:10:37.373984 | orchestrator | 2025-03-23 00:10:37 - testbed 2025-03-23 00:10:38.200002 | orchestrator | changed 2025-03-23 00:10:38.239022 | 2025-03-23 00:10:38.239132 | PLAY RECAP 2025-03-23 00:10:38.239187 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-03-23 00:10:38.239213 | 2025-03-23 00:10:38.368781 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-03-23 00:10:38.372881 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-03-23 00:10:39.082953 | 2025-03-23 00:10:39.083125 | PLAY [Base post-fetch] 2025-03-23 00:10:39.113653 | 2025-03-23 00:10:39.113835 | TASK [fetch-output : Set log path for multiple nodes] 2025-03-23 00:10:39.179965 | orchestrator | skipping: Conditional result was False 2025-03-23 00:10:39.188425 | 2025-03-23 00:10:39.188583 | TASK [fetch-output : Set log path for single node] 2025-03-23 00:10:39.242412 | orchestrator | ok 2025-03-23 00:10:39.262234 | 2025-03-23 00:10:39.262369 | LOOP [fetch-output : Ensure local output dirs] 2025-03-23 00:10:39.793877 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/0ad214290bf44f7ebea1a2f7a3cd85b0/work/logs" 2025-03-23 00:10:40.086936 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/0ad214290bf44f7ebea1a2f7a3cd85b0/work/artifacts" 2025-03-23 00:10:40.350638 | orchestrator -> localhost | changed: 
"/var/lib/zuul/builds/0ad214290bf44f7ebea1a2f7a3cd85b0/work/docs" 2025-03-23 00:10:40.365618 | 2025-03-23 00:10:40.365821 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-03-23 00:10:41.192004 | orchestrator | changed: .d..t...... ./ 2025-03-23 00:10:41.192321 | orchestrator | changed: All items complete 2025-03-23 00:10:41.192370 | 2025-03-23 00:10:41.786495 | orchestrator | changed: .d..t...... ./ 2025-03-23 00:10:42.329586 | orchestrator | changed: .d..t...... ./ 2025-03-23 00:10:42.357441 | 2025-03-23 00:10:42.357597 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-03-23 00:10:42.393186 | orchestrator | skipping: Conditional result was False 2025-03-23 00:10:42.403394 | orchestrator | skipping: Conditional result was False 2025-03-23 00:10:42.454773 | 2025-03-23 00:10:42.454867 | PLAY RECAP 2025-03-23 00:10:42.454926 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-03-23 00:10:42.454955 | 2025-03-23 00:10:42.565099 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-03-23 00:10:42.570567 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-03-23 00:10:43.262857 | 2025-03-23 00:10:43.263010 | PLAY [Base post] 2025-03-23 00:10:43.306938 | 2025-03-23 00:10:43.307076 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-03-23 00:10:44.083229 | orchestrator | changed 2025-03-23 00:10:44.157140 | 2025-03-23 00:10:44.157312 | PLAY RECAP 2025-03-23 00:10:44.157410 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-03-23 00:10:44.157497 | 2025-03-23 00:10:44.274092 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-03-23 00:10:44.281403 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-03-23 00:10:45.020858 | 
2025-03-23 00:10:45.021008 | PLAY [Base post-logs] 2025-03-23 00:10:45.037499 | 2025-03-23 00:10:45.037621 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-03-23 00:10:45.485352 | localhost | changed 2025-03-23 00:10:45.489300 | 2025-03-23 00:10:45.489434 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-03-23 00:10:45.516407 | localhost | ok 2025-03-23 00:10:45.522013 | 2025-03-23 00:10:45.522114 | TASK [Set zuul-log-path fact] 2025-03-23 00:10:45.539190 | localhost | ok 2025-03-23 00:10:45.549217 | 2025-03-23 00:10:45.549324 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-03-23 00:10:45.575622 | localhost | ok 2025-03-23 00:10:45.581119 | 2025-03-23 00:10:45.581222 | TASK [upload-logs : Create log directories] 2025-03-23 00:10:46.122165 | localhost | changed 2025-03-23 00:10:46.127686 | 2025-03-23 00:10:46.127900 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-03-23 00:10:46.676994 | localhost -> localhost | ok: Runtime: 0:00:00.008196 2025-03-23 00:10:46.687597 | 2025-03-23 00:10:46.687801 | TASK [upload-logs : Upload logs to log server] 2025-03-23 00:10:47.274116 | localhost | Output suppressed because no_log was given 2025-03-23 00:10:47.280963 | 2025-03-23 00:10:47.281141 | LOOP [upload-logs : Compress console log and json output] 2025-03-23 00:10:47.357602 | localhost | skipping: Conditional result was False 2025-03-23 00:10:47.375851 | localhost | skipping: Conditional result was False 2025-03-23 00:10:47.385890 | 2025-03-23 00:10:47.386056 | LOOP [upload-logs : Upload compressed console log and json output] 2025-03-23 00:10:47.449536 | localhost | skipping: Conditional result was False 2025-03-23 00:10:47.450233 | 2025-03-23 00:10:47.462735 | localhost | skipping: Conditional result was False 2025-03-23 00:10:47.472507 | 2025-03-23 00:10:47.472695 | LOOP [upload-logs : Upload console log and json output]